Influencer Brand Safety Intelligence MCP Server
Influencer brand safety screening for AI agents and brand teams — this MCP server delivers automated creator vetting, controversy risk analysis, audience authenticity scoring, and sanctions screening through 8 callable tools that orchestrate 9 parallel data sources. Built for brand marketers, influencer agencies, legal and compliance teams, and PR risk managers who need structured, data-driven intelligence before committing partnership budgets.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| creator_brand_safety_screen | Screen creator for brand safety risks | $0.10 |
| audience_authenticity_check | Verify audience authenticity and engagement quality | $0.08 |
| controversy_risk_analysis | Analyze controversy risk from multiple sources | $0.12 |
| platform_diversification_score | Assess creator platform diversification | $0.06 |
| historical_content_audit | Audit historical content via Wayback Machine | $0.10 |
| sanctions_watchlist_check | Screen against OFAC and OpenSanctions | $0.08 |
| compare_creators | Compare multiple creators on brand safety metrics | $0.20 |
| brand_fit_assessment | Comprehensive brand fit and safety assessment | $0.25 |
Example: 100 events = $10.00 · 1,000 events = $100.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
Server endpoint:

```
https://ryanclinton--influencer-brand-safety-intelligence-mcp.apify.actor/mcp
```

```json
{
  "mcpServers": {
    "influencer-brand-safety-intelligence-mcp": {
      "url": "https://ryanclinton--influencer-brand-safety-intelligence-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation
The server runs on Apify's Standby infrastructure and exposes an HTTP endpoint that any MCP-compatible client can call. Each tool dispatches parallel requests across social media, review platforms, sanctions lists, historical archives, and web presence data — then runs them through 4 scoring models to produce a Composite Brand Fit Score (0-100) with a 5-tier partnership verdict. There is no subscription, no monthly fee, and no minimum commitment.
What data can you extract?
| Data Point | Source | Example |
|---|---|---|
| 📡 Social media posts and engagement patterns | Bluesky Social Search | 47 posts, 12 controversy flags detected |
| ⭐ Consumer review sentiment | Trustpilot Review Analyzer | 3.1/5 avg, 38% negative reviews |
| 🌐 Cross-platform review aggregation | Multi-Review Analyzer | Google + Yelp: 142 reviews analyzed |
| 🛡️ US Treasury SDN list screening | OFAC Sanctions Search | 0 matches (CLEAR) / 1 match (REVIEW_REQUIRED) |
| 🌍 Multi-jurisdiction watchlist and PEP flags | OpenSanctions Search | 100+ international sanctions databases queried |
| 💬 Tech community reputation and discussions | Hacker News Search | 8 HN threads found, 3 controversy flags |
| 🗂️ Historical web archive snapshots | Wayback Machine Search | 34 archived pages, 2 deleted content flags |
| 📞 Creator web presence and contact verification | Website Contact Scraper | Professional website verified, contact info found |
| 📄 Website content analysis for brand alignment | Website Content to Markdown | Full site content extracted and keyword-scanned |
| 🎯 Composite Brand Fit Score | All 9 sources combined | Score: 18/100 — Verdict: BRAND_SAFE |
| 📋 Dimensional risk scores | 4 scoring models | Safety 87, Authenticity 74, Controversy 12, History 5 |
| 🚩 Risk signals and recommendations | Automated analysis | "Clean review profile — positive brand association" |
❓ Why use Influencer Brand Safety Intelligence MCP?
Manual influencer vetting is slow, inconsistent, and expensive. Reviewing social media history, checking sanctions databases, auditing archived content, and assessing audience quality for a single creator typically takes 2-4 hours per candidate — and most teams skip steps that later surface as brand crises. Agencies managing 50+ creator partnerships per month face a review backlog that makes thorough vetting impractical at scale.
This MCP server automates the entire influencer brand safety workflow. Plug it into Claude, Cursor, Windsurf, Cline, or any HTTP-capable AI agent, and your AI assistant can run structured creator vetting as a natural part of campaign planning — no custom code required.
- Scheduling — run recurring safety screens on live partnerships to detect emerging controversy or declining authenticity
- API access — trigger brand safety checks from Python, JavaScript, or any HTTP client in CI/CD pipelines or influencer management tools
- Proxy rotation — data collection runs on Apify's proxy infrastructure to ensure reliable access at scale
- Monitoring — configure Slack or email alerts when screenings surface new risk signals
- Integrations — connect results to Zapier, Make, Google Sheets, HubSpot, or influencer CRM platforms via webhooks
Features
- 8 targeted MCP tools covering every phase of creator vetting: brand safety screen, audience authenticity check, controversy risk analysis, platform diversification score, historical content audit, sanctions watchlist check, multi-creator comparison, and full brand fit assessment
- 9 parallel data sources queried simultaneously to minimize latency — Bluesky, Trustpilot, Multi-Review Analyzer, OFAC, OpenSanctions, Hacker News, Wayback Machine, Website Contact Scraper, and Website Content to Markdown
- 17 controversy keyword patterns scanned across all social and historical content including: scandal, cancel, boycott, harassment, fraud, lawsuit, and 11 additional high-risk terms
- 12 brand-unsafe content categories flagged independently from controversy signals: drug, violence, gambling, adult content, extremism, terrorism, conspiracy, misinformation, and 4 additional categories
- 8 audience inauthenticity signals detected: bot activity, fake followers, bought followers, engagement pods, click farms, and 3 related terms
- 4 independent scoring models combining into a composite score: Brand Safety (max 100, high = safer), Audience Authenticity (max 100, high = more authentic), Controversy Risk (max 100, high = riskier), Historical Content Risk (max 100, high = riskier)
- Composite Brand Fit Score weighted formula: (100-brandSafety) x 0.25 + (100-authenticity) x 0.20 + controversyRisk x 0.30 + historicalRisk x 0.25 — controversy weighted highest
- 5-tier partnership verdicts on composite score: BRAND_SAFE (0-19), APPROVED (20-39), CONDITIONAL (40-59), HIGH_RISK (60-79), DO_NOT_PARTNER (80-100)
- Sanctions confidence thresholding — OFAC matches flagged only when match score reaches 60+ to reduce false positives on common names
- Platform diversification scoring — active platforms x 25, reporting HIGHLY_DIVERSIFIED, DIVERSIFIED, MODERATE, or SINGLE_PLATFORM with deplatforming risk assessment
- Historical content deletion detection — Wayback Machine HTTP 404 and 410 status codes used to identify pages that have been actively removed
- Optional website URL parameter on authenticity and diversification tools — providing a direct URL improves website verification accuracy vs. name-based lookup
- Spending limit controls — each tool checks charge limits before execution and returns a structured error if the per-run budget is reached
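As a worked example of the diversification scoring listed above, here is a minimal Python sketch. The score formula (active platforms x 25, capped at 100) is documented; the mapping of platform counts to level labels is our assumption for illustration and may differ from the server's internals.

```python
def diversification(active_platforms: int) -> tuple[int, str]:
    """Documented formula: active platforms x 25, max 100.

    ASSUMPTION: level cut-offs (4+/3/2/<=1 platforms) are illustrative,
    not confirmed server behavior.
    """
    score = min(active_platforms * 25, 100)
    if active_platforms >= 4:
        level = "HIGHLY_DIVERSIFIED"
    elif active_platforms == 3:
        level = "DIVERSIFIED"
    elif active_platforms == 2:
        level = "MODERATE"
    else:
        level = "SINGLE_PLATFORM"
    return score, level

print(diversification(4))  # (100, 'HIGHLY_DIVERSIFIED')
print(diversification(1))  # (25, 'SINGLE_PLATFORM')
```

A single-platform creator scores 25 regardless of audience size, which is why the tool frames the result as deplatforming risk rather than reach.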
Use cases for influencer brand safety screening
Pre-campaign creator vetting
Brand managers and marketing directors screening creator shortlists before committing partnership budgets. Run brand_fit_assessment on the top 5-10 candidates to get composite scores and verdicts in minutes rather than days. Structured JSON output integrates directly with campaign planning spreadsheets or influencer CRMs.
Influencer agency portfolio management
Agencies managing dozens of active creator partnerships need ongoing risk monitoring, not just point-in-time vetting. Schedule creator_brand_safety_screen to run weekly or monthly on all roster members, and use webhooks to pipe results into Slack channels or HubSpot when risk signals change.
Legal and compliance review
Enterprise brands with in-house legal teams need documented evidence of sanctions screening before signing talent contracts. sanctions_watchlist_check screens both OFAC SDN and OpenSanctions in one call and returns a structured BLOCKED / REVIEW_REQUIRED / CLEAR verdict that can be archived as a compliance record alongside the contract.
Crisis prevention and PR risk management
PR teams vetting creators who are inbound for brand deals or who have been proposed by media buyers. controversy_risk_analysis surfaces current social controversy, historical web archive flags, and sanctions exposure in a single structured response — giving PR teams documented risk justification for rejecting or accepting creators.
Multi-creator shortlist comparison
Media planners comparing creator candidates for campaign slots. compare_creators accepts 2-5 names and returns a ranked comparison sorted by composite brand safety score, with per-dimension ratings (safety level, authenticity level, controversy level, historical risk level) for side-by-side evaluation.
AI agent workflow integration
AI development teams building autonomous marketing agents, influencer outreach bots, or brand safety automation pipelines. Because this is an MCP server, AI agents like Claude can call creator vetting tools as native capabilities — no custom API wrapper code required.
How to screen a creator for brand safety
- Connect the MCP server — Add the server URL to your MCP client config (Claude Desktop, Cursor, Windsurf, or any compatible client). See the connection instructions below.
- Ask your AI assistant to vet a creator — Type a natural language prompt such as "Screen @jasminewrites for brand safety and tell me if she's safe for a partnership." The AI will automatically select and call the appropriate tools.
- Review the structured output — The tool returns scores, verdicts, and a list of specific risk signals within 30-90 seconds. Signals are plain-language descriptions you can copy directly into a vetting report.
- Run a full brand fit assessment for final candidates — Use `brand_fit_assessment` on shortlisted creators for a comprehensive 9-source report before budget commitment.
⬇️ MCP tools
| Tool | Price | Description |
|---|---|---|
| creator_brand_safety_screen | $0.045 | Brand safety scan via Bluesky, Trustpilot, multi-platform reviews, and Hacker News. Returns safety score 0-100 and safety level |
| audience_authenticity_check | $0.045 | Engagement quality, cross-platform presence, and website verification. Returns authenticity score and level |
| controversy_risk_analysis | $0.045 | Social controversy, Hacker News discussion, Wayback Machine archive flags, OFAC, and OpenSanctions. Returns controversy score and risk level |
| platform_diversification_score | $0.045 | Active platforms x 25 score. Returns HIGHLY_DIVERSIFIED / DIVERSIFIED / MODERATE / SINGLE_PLATFORM |
| historical_content_audit | $0.045 | Wayback Machine archive scan for deleted and modified content with HTTP status code analysis |
| sanctions_watchlist_check | $0.045 | OFAC SDN + OpenSanctions dual-source screening. Returns BLOCKED / REVIEW_REQUIRED / CLEAR verdict |
| compare_creators | $0.045 | Rank 2-5 creators by composite brand safety score. Returns sorted comparison with dimension ratings |
| brand_fit_assessment | $0.045 | Full 9-source analysis combining all 4 scoring models. Returns composite score, verdict, and recommendations |
Tool parameters
| Tool | Parameter | Type | Required | Description |
|---|---|---|---|---|
| creator_brand_safety_screen | creator | string | Yes | Creator name, handle, or brand to screen |
| audience_authenticity_check | creator | string | Yes | Creator name or handle |
| audience_authenticity_check | website | string | No | Creator website URL — improves verification accuracy |
| controversy_risk_analysis | creator | string | Yes | Creator name or entity to analyze |
| platform_diversification_score | creator | string | Yes | Creator name or handle |
| platform_diversification_score | website | string | No | Creator website URL |
| historical_content_audit | creator | string | Yes | Creator name or website URL to audit |
| sanctions_watchlist_check | entity | string | Yes | Creator name or entity to screen |
| compare_creators | creators | string[] | Yes | Array of 2-5 creator names to compare |
| brand_fit_assessment | creator | string | Yes | Creator name or handle to assess |
⬆️ Output example
Full output from `brand_fit_assessment` for a creator named "Jordan Westfield":

```json
{
  "entity": "Jordan Westfield",
  "compositeScore": 22,
  "verdict": "APPROVED",
  "brandSafety": {
    "score": 82,
    "controversyFlags": 0,
    "unsafeContentFlags": 0,
    "safetyLevel": "BRAND_SAFE",
    "signals": [
      "No controversy or unsafe content flags — brand-safe profile",
      "Clean review profile — positive brand association"
    ]
  },
  "audienceAuthenticity": {
    "score": 71,
    "engagementQuality": 28,
    "authenticityIndicators": 6,
    "authenticityLevel": "AUTHENTIC",
    "signals": [
      "Professional website verified — legitimate creator presence",
      "Multi-platform presence verified — strong authenticity indicator",
      "3 HN discussions — genuine community presence"
    ]
  },
  "controversyRisk": {
    "score": 6,
    "socialControversy": 0,
    "historicalFlags": 0,
    "sanctionFlags": 0,
    "riskLevel": "CLEAN",
    "signals": []
  },
  "historicalContent": {
    "score": 0,
    "archivedPages": 14,
    "deletedContentFlags": 0,
    "contentRiskLevel": "CLEAN",
    "signals": []
  },
  "allSignals": [
    "No controversy or unsafe content flags — brand-safe profile",
    "Clean review profile — positive brand association",
    "Professional website verified — legitimate creator presence",
    "Multi-platform presence verified — strong authenticity indicator",
    "3 HN discussions — genuine community presence"
  ],
  "recommendations": []
}
```
High-risk example output from `controversy_risk_analysis`:

```json
{
  "score": 68,
  "socialControversy": 7,
  "historicalFlags": 3,
  "sanctionFlags": 0,
  "riskLevel": "HIGH",
  "signals": [
    "7 controversy mentions across platforms — pattern of controversial content",
    "3 historical content flags — deleted/modified controversial content detected"
  ]
}
```
Sanctions screening output from `sanctions_watchlist_check`:

```json
{
  "entity": "Viktor Kravchenko",
  "hits": 1,
  "matches": [
    {
      "source": "OpenSanctions",
      "datasets": ["us_ofac_sdn", "eu_fsf_sanctions"],
      "name": "Viktor V. Kravchenko"
    }
  ],
  "verdict": "REVIEW_REQUIRED",
  "signals": [
    "1 sanctions/watchlist matches — partnership must not proceed without legal review"
  ]
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| entity | string | The creator name that was screened |
| compositeScore | number | Overall risk score 0-100 (lower = safer for brand partnerships) |
| verdict | string | BRAND_SAFE / APPROVED / CONDITIONAL / HIGH_RISK / DO_NOT_PARTNER |
| brandSafety.score | number | Brand safety score 0-100 (higher = safer) |
| brandSafety.controversyFlags | number | Count of controversy keyword hits in social content |
| brandSafety.unsafeContentFlags | number | Count of brand-unsafe content keyword hits |
| brandSafety.safetyLevel | string | UNSAFE / HIGH_RISK / CAUTION / SAFE / BRAND_SAFE |
| brandSafety.signals | string[] | Plain-language risk and positive signal descriptions |
| audienceAuthenticity.score | number | Authenticity score 0-100 (higher = more authentic) |
| audienceAuthenticity.engagementQuality | number | Engagement quality sub-score (max 35) |
| audienceAuthenticity.authenticityIndicators | number | Count of positive authenticity indicators |
| audienceAuthenticity.authenticityLevel | string | FAKE / SUSPICIOUS / MIXED / AUTHENTIC / VERIFIED |
| controversyRisk.score | number | Controversy risk score 0-100 (higher = riskier) |
| controversyRisk.socialControversy | number | Count of controversy items across social platforms |
| controversyRisk.historicalFlags | number | Count of controversy flags in web archive |
| controversyRisk.sanctionFlags | number | Count of OFAC and OpenSanctions matches |
| controversyRisk.riskLevel | string | CLEAN / LOW / MODERATE / HIGH / CRITICAL |
| historicalContent.score | number | Historical content risk score 0-100 |
| historicalContent.archivedPages | number | Total archived pages found in Wayback Machine |
| historicalContent.deletedContentFlags | number | Count of pages with 404/410 HTTP status |
| historicalContent.contentRiskLevel | string | CLEAN / MINOR / NOTABLE / HIGH / CRITICAL |
| allSignals | string[] | Deduplicated list of all risk and positive signals across all dimensions |
| recommendations | string[] | Actionable partnership recommendations based on risk findings |
| platforms | object | Per-platform presence map (diversification tool) |
| diversificationScore | number | Active platforms x 25, max 100 |
| level | string | HIGHLY_DIVERSIFIED / DIVERSIFIED / MODERATE / SINGLE_PLATFORM |
| hits | number | Total sanctions/watchlist match count |
| matches[].source | string | OFAC or OpenSanctions |
| matches[].score | number | OFAC match confidence score (0-100) |
| matches[].datasets | string[] | OpenSanctions dataset identifiers |
How much does it cost to screen influencers for brand safety?
This MCP uses pay-per-event pricing — you pay $0.045 per tool call. Platform compute costs are included. There is no monthly subscription.
| Scenario | Tool calls | Cost per call | Total cost |
|---|---|---|---|
| Quick safety check | 1 | $0.045 | $0.045 |
| Vet 10 creators (basic screen each) | 10 | $0.045 | $0.45 |
| Full brand fit assessment x 5 creators | 5 | $0.045 | $0.225 |
| Compare two shortlists of 5 (compare_creators x 2) | 2 | $0.045 | $0.09 |
| Monthly monitoring of 50 partnerships (weekly screen) | 200 | $0.045 | $9.00 |
You can set a maximum spending limit per run to control costs. The MCP checks the limit before each tool execution and stops gracefully when your budget is reached.
Compare this to manual creator vetting agencies at $500-1,500 per creator audit, or enterprise influencer risk platforms at $800-3,000/month — with this MCP, monthly brand safety programs typically cost under $10 with no subscription commitment. Apify's free tier includes $5 of monthly credits, covering over 100 individual tool calls.
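The arithmetic behind the monitoring scenario above is simple enough to budget programmatically. A minimal Python sketch, assuming the flat $0.045-per-call price quoted in the table:

```python
PRICE_PER_CALL = 0.045  # pay-per-event price for each tool call

def monthly_cost(partnerships: int, screens_per_month: int) -> float:
    """Estimated monthly spend for recurring safety screens."""
    return partnerships * screens_per_month * PRICE_PER_CALL

# 50 active partnerships screened weekly (about 4x per month) = 200 calls
print(round(monthly_cost(50, 4), 2))  # 9.0
```

Swap in your own roster size and screening cadence to size a per-run spending limit before scheduling.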
How to connect this MCP server
Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "influencer-brand-safety": {
      "url": "https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```
Cursor / Windsurf / Cline
Add the MCP server endpoint in your IDE's MCP settings panel:
- URL: `https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp`
- Auth: Bearer token (your Apify API token)
Python
```python
import httpx
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"
MCP_URL = "https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp"

def screen_creator(creator_name: str) -> dict:
    payload = {
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {
            "name": "brand_fit_assessment",
            "arguments": {"creator": creator_name}
        },
        "id": 1
    }
    response = httpx.post(
        MCP_URL,
        json=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {APIFY_TOKEN}"
        },
        timeout=120
    )
    result = response.json()
    content = json.loads(result["result"]["content"][0]["text"])
    print(f"Creator: {content['entity']}")
    print(f"Verdict: {content['verdict']} | Score: {content['compositeScore']}/100")
    for signal in content.get("allSignals", []):
        print(f"  - {signal}")
    return content

screen_creator("Jordan Westfield")
```
JavaScript
```javascript
const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const MCP_URL = "https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp";

async function screenCreator(creatorName) {
  const response = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${APIFY_TOKEN}`,
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "tools/call",
      params: {
        name: "brand_fit_assessment",
        arguments: { creator: creatorName },
      },
      id: 1,
    }),
  });
  const result = await response.json();
  const content = JSON.parse(result.result.content[0].text);
  console.log(`Creator: ${content.entity}`);
  console.log(`Verdict: ${content.verdict} | Score: ${content.compositeScore}/100`);
  for (const signal of content.allSignals) {
    console.log(`  - ${signal}`);
  }
  return content;
}

screenCreator("Jordan Westfield");
```
cURL
```bash
# Full brand fit assessment
curl -X POST "https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "brand_fit_assessment",
      "arguments": {"creator": "Jordan Westfield"}
    },
    "id": 1
  }'

# Quick sanctions check only
curl -X POST "https://influencer-brand-safety-intelligence-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "sanctions_watchlist_check",
      "arguments": {"entity": "Jordan Westfield"}
    },
    "id": 2
  }'
```
How Influencer Brand Safety Intelligence MCP works
Phase 1: Parallel data collection
When a tool is called, the actor-client module dispatches simultaneous requests to up to 9 sub-actors using Promise.all. Each sub-actor runs with 512 MB memory and a 120-second timeout. The brand_fit_assessment tool queries all 9 sources in parallel: Bluesky social search (recent posts and engagement metrics), Trustpilot (consumer review sentiment), Multi-Review Analyzer (cross-platform reviews from Google and Yelp), OFAC Sanctions Search (US Treasury SDN list), OpenSanctions (100+ international jurisdiction databases), Hacker News Search (YC community discussions and points), Wayback Machine (historical archive snapshots), Website Contact Scraper (contact and presence verification), and Website Content to Markdown (full site content for keyword analysis).
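The server itself is Node.js and fans out with Promise.all; an analogous fan-out in Python with asyncio.gather illustrates the latency benefit of the parallel dispatch described above. The `query_source` stub below is a placeholder for a sub-actor call, not the server's real client code:

```python
import asyncio

async def query_source(name: str, creator: str) -> dict:
    """Stand-in for one sub-actor call (real calls hit Apify actors)."""
    await asyncio.sleep(0.01)  # simulate network latency
    return {"source": name, "creator": creator, "items": []}

async def collect(creator: str) -> list[dict]:
    sources = ["bluesky", "trustpilot", "multi_review", "ofac",
               "opensanctions", "hackernews", "wayback",
               "contact_scraper", "content_markdown"]
    # Dispatch all 9 lookups concurrently; total wall-clock time is
    # roughly the slowest single source, not the sum of all of them.
    tasks = [query_source(s, creator) for s in sources]
    return await asyncio.gather(*tasks)

results = asyncio.run(collect("Jordan Westfield"))
print(len(results))  # 9
```

In production code you would likely tolerate individual source failures (e.g. `return_exceptions=True`, or Promise.allSettled on the Node side) so one slow or failing sub-actor does not sink the whole assessment.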
Phase 2: Keyword scanning and flag accumulation
Once data is collected, four independent scoring functions process the results. Brand Safety scanning checks all Bluesky posts and HN items against two keyword lists: 17 controversy terms (scandal, cancel, boycott, harassment, etc.) and 12 brand-unsafe content terms (drug, violence, gambling, adult, extremism, etc.). Audience Authenticity scanning checks social posts for 8 inauthenticity patterns (bot, fake, bought followers, engagement pod, click farm, etc.) and evaluates like-to-reply ratios as engagement quality proxies. Historical Content analysis checks Wayback Machine URLs and content against both keyword lists and reads HTTP status codes — 404 and 410 responses indicate actively deleted pages, which score as deletedContentFlags. Sanctions screening compares OFAC match scores against a 60-point confidence threshold to minimize false positives on common names.
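A simplified sketch of the keyword-flag accumulation described above. The term lists here are small subsets of the documented 17- and 12-term lists, and the exact counting semantics (one flag per term hit per post) are an assumption for illustration:

```python
# Subsets of the documented keyword lists, for illustration only
CONTROVERSY_TERMS = ["scandal", "cancel", "boycott", "harassment", "fraud", "lawsuit"]
UNSAFE_TERMS = ["drug", "violence", "gambling", "extremism"]

def scan_posts(posts: list[str]) -> dict:
    """Count keyword hits across posts (assumed: one flag per term hit)."""
    controversy = sum(
        1 for p in posts for term in CONTROVERSY_TERMS if term in p.lower()
    )
    unsafe = sum(
        1 for p in posts for term in UNSAFE_TERMS if term in p.lower()
    )
    return {"controversyFlags": controversy, "unsafeContentFlags": unsafe}

flags = scan_posts([
    "Great collab with the team!",
    "Responding to the boycott calls after last week's scandal",
])
print(flags)  # {'controversyFlags': 2, 'unsafeContentFlags': 0}
```

The same two lists are reapplied to Wayback Machine content in the historical model, which is why a deleted page matching a controversy term counts in two dimensions.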
Phase 3: Score computation and verdict generation
Each scoring model applies a weighted formula capped at 100. The composite score formula weights controversy risk most heavily at 30%: (100 - brandSafety) × 0.25 + (100 - authenticity) × 0.20 + controversyRisk × 0.30 + historicalRisk × 0.25. Because brand safety and authenticity are inverted (high = good), they are subtracted from 100 before weighting. The final composite score maps to verdicts: BRAND_SAFE (0-19), APPROVED (20-39), CONDITIONAL (40-59), HIGH_RISK (60-79), DO_NOT_PARTNER (80-100). Signals from all four models are merged into allSignals and five specific risk conditions trigger automatic recommendations — including forced recommendations for sanctions matches and critical controversy findings.
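The documented composite formula and verdict tiers can be expressed directly in Python. This is an illustration of the published formula, not the server's source code:

```python
def composite_score(brand_safety: float, authenticity: float,
                    controversy_risk: float, historical_risk: float) -> float:
    """Documented weighted formula. brand_safety and authenticity are
    'high = good' scores, so they are inverted before weighting;
    controversy carries the highest weight (0.30)."""
    return ((100 - brand_safety) * 0.25
            + (100 - authenticity) * 0.20
            + controversy_risk * 0.30
            + historical_risk * 0.25)

def verdict(score: float) -> str:
    """Map a 0-100 composite score to the documented 5-tier verdict."""
    if score < 20:
        return "BRAND_SAFE"
    if score < 40:
        return "APPROVED"
    if score < 60:
        return "CONDITIONAL"
    if score < 80:
        return "HIGH_RISK"
    return "DO_NOT_PARTNER"

score = composite_score(60, 50, 40, 20)  # illustrative dimension scores
print(round(score, 1), verdict(score))  # 37.0 APPROVED
```

Note that a creator with perfect brand safety and authenticity can still land in HIGH_RISK on controversy and history alone, since those two dimensions carry 55% of the weight.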
Phase 4: MCP transport and response
The server uses @modelcontextprotocol/sdk v1.12+ with StreamableHTTPServerTransport in stateless mode (no persistent session). A new McpServer instance is created per POST request to /mcp. The server runs on Apify Standby infrastructure, meaning it stays alive and responds to requests without cold-start latency after the first activation. Non-standby runs (direct actor invocations) start a temporary listener, print a health check message, and exit after 1 second.
Tips for best results
- Use `brand_fit_assessment` for final-stage vetting only. It queries 9 sources and costs $0.045 — but for shortlist screening, use `creator_brand_safety_screen` or `controversy_risk_analysis` first to eliminate obvious risks before committing to the full assessment.
- Provide the `website` parameter when you have it. The `audience_authenticity_check` and `platform_diversification_score` tools become significantly more accurate when given a direct creator website URL rather than relying on name-based lookup, because website content is fetched and analyzed directly rather than inferred from search results.
- Use `compare_creators` before single assessments. When evaluating a shortlist of 3-5 creators, `compare_creators` runs parallel assessments and returns a ranked list — it is faster than sequential individual assessments and provides relative scoring context.
- Schedule recurring screens on active partnerships. A creator who passes initial vetting can develop controversy months later. Use Apify Scheduling to run `creator_brand_safety_screen` monthly on all active partnership creators, and configure webhooks to alert your team when signals change.
- Document sanctions checks separately. For enterprise compliance requirements, run `sanctions_watchlist_check` as a distinct step and save the JSON output with the contract. The structured BLOCKED / REVIEW_REQUIRED / CLEAR verdict provides a documented compliance record.
- Set a spending limit per run. If your AI agent loops over a large creator list, configure `maxTotalChargeUsd` to cap spending. The MCP returns a structured error object rather than crashing when the limit is reached, so your agent can handle it gracefully.
- Cross-reference with Trustpilot Review Analyzer. For brand partnerships in consumer-facing industries, pull Trustpilot data independently with the full analyzer for deeper review sentiment — the brand safety screen uses a lighter review query.
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| Trustpilot Review Analyzer | Pull a full review dataset for any creator or brand entity to supplement the brand safety score with deep sentiment trend analysis |
| Multi-Review Analyzer | Run the full multi-platform review analysis (Google, Yelp, BBB) independently for deeper reputation intelligence on creators with significant business operations |
| Website Contact Scraper | Extract full contact and team information from a creator's website after a positive brand fit assessment to initiate outreach |
| Website Content to Markdown | Convert a creator's full website content to markdown for LLM analysis of brand values alignment, tone of voice, and audience targeting fit |
| Website Tech Stack Detector | Assess creator platform sophistication by detecting CMS, analytics, and monetization tools in use on their website |
| B2B Lead Qualifier | After brand fit screening, score creator prospects against your ideal partner profile (audience size, niche alignment, engagement rate) before outreach |
| HubSpot Lead Pusher | Push vetted creator records directly into HubSpot as contacts with brand safety scores attached as CRM properties |
Limitations
- Bluesky-only social media analysis. The server queries Bluesky for social content. Instagram, TikTok, YouTube, X/Twitter, and LinkedIn are not analyzed. Creators with minimal Bluesky presence will produce lower-confidence social scores. For Instagram and TikTok-heavy creators, supplement with manual profile review.
- Name-based matching has false positive risk. Creator names are used as search queries across all data sources. Common names may surface controversy or sanctions data belonging to different individuals. Always review the raw `signals` array before acting on a high-risk verdict for a creator with a common name.
- Review data depends on creator business profile. Trustpilot and multi-review data is most useful for creators who operate a business (courses, products, services). Personal lifestyle creators without commercial products will often return empty review datasets, which reduces review-based scoring confidence.
- Hacker News relevance is niche-specific. HN discussion data is most relevant for technology, startup, and developer-facing creators. Lifestyle, fashion, and entertainment creators are unlikely to appear in HN search results, which limits the HN scoring dimension's usefulness for those niches.
- Historical archive coverage is incomplete. Wayback Machine coverage depends on crawl frequency. Creator websites that were not frequently crawled may have limited snapshots. The absence of archive data does not confirm a clean history.
- Sanctions screening is name-based, not identity-verified. OFAC and OpenSanctions matches are flagged at a 60-point confidence threshold but are not identity-confirmed. Any REVIEW_REQUIRED verdict should be reviewed by a compliance professional before contract rejection.
- No real-time social monitoring. This tool performs point-in-time analysis. It does not watch for new controversy posts between runs. Use Apify Scheduling for ongoing monitoring.
- OpenSanctions and OFAC data freshness. Sanctions list data is as current as the underlying databases maintained by OFAC and the OpenSanctions project. There may be a lag between a new listing and its appearance in search results.
Integrations
- Zapier — trigger creator brand safety screens from Zapier workflows when new influencer proposals arrive in Airtable or Notion
- Make — build automated creator vetting pipelines that screen candidates and route approved creators to your CRM or rejection candidates to a review queue
- Google Sheets — push brand fit scores and verdicts into a creator tracking spreadsheet for team review and sign-off workflows
- Apify API — call tools programmatically from your influencer management platform or brand safety dashboard
- Webhooks — receive notifications when recurring safety screens detect new risk signals on active creator partnerships
- LangChain / LlamaIndex — use brand safety tool results as retrieval-augmented context in AI agents that draft partnership briefs or risk memos
Troubleshooting
- **All scores return zero or minimal values.** This typically means the creator name returned no results from the underlying data sources. Try the creator's full legal name, their primary handle with @, or their brand/company name. Some creators have no Bluesky presence, no reviews, and no HN discussions — the MCP cannot score what it cannot find.
- **REVIEW_REQUIRED verdict on a creator you know is safe.** Common names trigger false-positive sanctions matches. Review the `matches` array in the `sanctions_watchlist_check` response — compare the matched entity's full name, nationality, and dataset context against the creator's profile. If the details do not match, the hit is a false positive. OFAC confidence scores below 75 on common names should be treated as inconclusive.
- **`brand_fit_assessment` taking longer than 90 seconds.** The full assessment queries 9 sub-actors in parallel. During high-traffic periods on Apify infrastructure, individual sub-actors may queue. If timeouts are a recurring issue, use the lighter individual tools (`creator_brand_safety_screen`, `controversy_risk_analysis`) for rapid screening and reserve `brand_fit_assessment` for final-stage candidates.
- **Audience authenticity score unexpectedly low.** If you did not provide a `website` parameter, website presence contributes zero to the authenticity score. Re-run `audience_authenticity_check` with the creator's website URL to get a complete authenticity assessment including website verification (max 25 points) and cross-platform score (max 20 points).
- **AI client not finding the MCP tools.** Verify your MCP client config includes the Authorization header with a valid Apify API token. The server runs in Standby mode — if it has not been activated recently, the first request may take 5-15 seconds to initialize.
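If your client supports custom request headers, the Authorization header can be added directly to the server entry. This is a sketch only — the `headers` field is supported by some MCP clients (for example Cursor-style `mcp.json` configs) but not all, so consult your client's documentation for the exact key:

```json
{
  "mcpServers": {
    "influencer-brand-safety-intelligence-mcp": {
      "url": "https://ryanclinton--influencer-brand-safety-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_APIFY_API_TOKEN>"
      }
    }
  }
}
```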
Responsible use
- This server only accesses publicly available content: public social media posts, public review pages, publicly available sanctions lists, and publicly archived web content.
- Sanctions screening results must be reviewed by a qualified compliance professional before being used to deny a contract or terminate a partnership.
- Do not use brand safety scores as the sole basis for decisions that could cause material harm to a creator's livelihood. Scores are signals to investigate, not definitive verdicts.
- Comply with GDPR, CCPA, and applicable data protection laws when storing screening results associated with named individuals.
- For guidance on web scraping legality, see Apify's guide.
❓ FAQ
How many creators can I screen with the influencer brand safety MCP in one session?
There is no hard limit per session. The compare_creators tool handles 2-5 creators per call. For larger batches, loop over creators programmatically using the Python or JavaScript examples above — each call is billed at the per-event rates in the pricing table and typically completes in 30-90 seconds.
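A batch loop can be sketched as follows. The `screen_creator` function here is a hypothetical placeholder you would wire to your MCP or Apify client; the small delay between calls is an assumption to stay gentle on rate limits:

```python
import time

def screen_creator(name: str) -> dict:
    """Placeholder for a creator_brand_safety_screen tools/call.
    Replace the body with a real request via your MCP client; this stub
    just returns a marker result so the loop structure is runnable."""
    return {"creator": name, "verdict": "PENDING"}

def screen_batch(creators: list[str], delay_s: float = 1.0) -> list[dict]:
    """Screen creators sequentially, pausing between calls."""
    results = []
    for name in creators:
        results.append(screen_creator(name))
        time.sleep(delay_s)  # live calls already take 30-90 seconds each
    return results
```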
Does this MCP analyze Instagram, TikTok, or YouTube for brand safety?
Not directly. The server analyzes Bluesky social media, Hacker News, review platforms, and web presence. For Instagram and TikTok creators, web presence, review data, historical archives, and sanctions screening still provide meaningful safety signals, but the social content dimension will be limited.
How accurate is the audience authenticity score?
Authenticity scoring uses cross-platform consistency, engagement quality (like-to-reply ratios on Bluesky), website verification, and HN community presence as proxies for authentic engagement. It is most accurate for tech and professional creators with multi-platform presence. For lifestyle and entertainment creators who primarily operate on platforms not covered here, treat the score as indicative rather than definitive.
What happens when a creator has a common name and triggers a false sanctions match?
The sanctions_watchlist_check tool returns the full matches array including matched name, source, and confidence score. For OFAC matches, scores below 75 on common names should be treated as inconclusive. A compliance review is required before any action is taken on a REVIEW_REQUIRED verdict.
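The triage rule described above can be expressed as a small filter. This is a sketch under assumptions: the `source` and `confidence` field names inside each match are illustrative — inspect an actual `sanctions_watchlist_check` response for the real keys, and always route remaining hits to a compliance professional:

```python
def triage_sanctions_matches(matches: list[dict], common_name: bool,
                             ofac_threshold: int = 75) -> list[dict]:
    """Keep only matches that warrant compliance review.

    Per the guidance above, OFAC hits below the confidence threshold on
    common names are treated as inconclusive and filtered out.
    """
    actionable = []
    for m in matches:
        if (common_name and m.get("source") == "OFAC"
                and m.get("confidence", 0) < ofac_threshold):
            continue  # inconclusive low-confidence OFAC hit on a common name
        actionable.append(m)
    return actionable
```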
How is influencer brand safety screening different from manual agency vetting?
Manual agency vetting typically takes 2-4 hours per creator and costs $500-1,500 per audit, with variable consistency. This MCP runs 9 parallel data sources in under 90 seconds at the per-event prices listed above, producing a consistent, structured score that is reproducible across runs. Manual vetting still adds value for nuanced judgment calls — this tool is for systematic, data-driven risk signals.
Can I schedule recurring brand safety screens on active creator partnerships?
Yes. Use Apify's Scheduling feature to run creator_brand_safety_screen monthly or weekly on each active partnership creator. Configure webhooks to deliver results to Slack or your influencer CRM whenever a screen completes, so your team is notified of new risk signals without manual follow-up.
Is it legal to use publicly available data for influencer brand safety screening?
Yes. All data sources — public social media posts, public review platforms, publicly available government sanctions lists, and publicly archived web content — are legally accessible. See Apify's guide on web scraping legality. Use screening results responsibly and in compliance with applicable privacy laws.
How does the composite brand safety score weighting work?
The composite score formula is: (100 - brandSafety) × 0.25 + (100 - authenticity) × 0.20 + controversyRisk × 0.30 + historicalRisk × 0.25. Controversy risk receives the highest weight (30%) because social controversy is the most direct predictor of brand damage. Brand safety and authenticity scores are inverted before weighting because they are "higher is better" metrics.
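The formula above translates directly into code. A minimal sketch, using the weights stated in the answer (inputs are assumed to be 0-100 scales, with brand safety and authenticity inverted because higher is better):

```python
def composite_risk(brand_safety: float, authenticity: float,
                   controversy_risk: float, historical_risk: float) -> float:
    """Composite brand-risk score per the stated weighting.

    brand_safety and authenticity are 'higher is better' metrics, so
    they are inverted; controversy_risk and historical_risk are already
    risk-oriented and enter the sum directly.
    """
    return ((100 - brand_safety) * 0.25
            + (100 - authenticity) * 0.20
            + controversy_risk * 0.30
            + historical_risk * 0.25)

# Example: a creator with strong safety (80) and authenticity (70)
# but some controversy (20) and historical (10) risk scores 19.5.
```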
What MCP clients work with this server?
Any client that supports the Model Context Protocol over HTTP: Claude Desktop, Cursor, Windsurf, Cline, and any custom HTTP client using the JSON-RPC 2.0 tools/call method. See the connection examples above.
What if the brand fit assessment verdict is CONDITIONAL — should I proceed?
CONDITIONAL (composite score 40-59) means risk signals are present but not disqualifying. Review the allSignals array to understand which specific dimensions are elevated. Proceed only after addressing the specific concerns — for example, if only the authenticity score is elevated, require an independent audience audit before finalizing the contract.
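A routing helper for agent pipelines might map composite scores to verdict bands. Only the CONDITIONAL band (40-59) is documented above; the other band names and cutoffs in this sketch are illustrative assumptions:

```python
def verdict_for(composite: float) -> str:
    """Map a composite risk score (higher = riskier) to a verdict band.

    The 40-59 CONDITIONAL band is documented; PROCEED and
    DO_NOT_PROCEED are assumed labels for the remaining ranges.
    """
    if composite < 40:
        return "PROCEED"        # assumed: low risk
    if composite < 60:
        return "CONDITIONAL"    # documented: signals present, not disqualifying
    return "DO_NOT_PROCEED"     # assumed: high risk
```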
How does the historical content audit detect deleted content?
The historical_content_audit tool queries Wayback Machine for archived snapshots of a creator's web presence. Pages that return HTTP 404 or 410 status codes in the archive indicate content that was actively removed. URLs and content text are also keyword-scanned against controversy and brand-unsafe term lists to detect controversial material in the archive that is no longer live.
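The two detection rules described above — deleted-page status codes and keyword scanning — can be sketched offline. The snapshot shape and the `UNSAFE_TERMS` list are illustrative assumptions, not the tool's actual term lists:

```python
from typing import NamedTuple

class Snapshot(NamedTuple):
    url: str
    status: int   # HTTP status recorded in the archive
    text: str     # archived page text (may be empty)

UNSAFE_TERMS = ("scandal", "lawsuit", "boycott")  # illustrative term list

def audit_snapshots(snapshots: list[Snapshot]) -> dict:
    """Flag deleted pages (404/410 in the archive) and keyword hits,
    mirroring the audit logic described above."""
    deleted = [s.url for s in snapshots if s.status in (404, 410)]
    keyword_hits = [
        s.url for s in snapshots
        if any(term in (s.url + " " + s.text).lower() for term in UNSAFE_TERMS)
    ]
    return {"deleted_pages": deleted, "keyword_hits": keyword_hits}
```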
Can I integrate influencer brand safety screening into my existing AI agent pipeline?
Yes — that is the primary use case. Because this is an MCP server, AI agents with MCP support (Claude, GPT-4 with MCP integration, LangChain agents) can call brand safety tools as native capabilities. Your agent can screen creators, evaluate scores, and make routing decisions entirely within an automated workflow.
Help us improve
If you encounter issues, help us debug faster by enabling run sharing in your Apify account:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions, enterprise risk scoring configurations, or additional data source integrations, reach out through the Apify platform.
How it works
Configure
Set your parameters in the Apify Console or pass them via API.
Run
Click Start, trigger via API, webhook, or set up a schedule.
Get results
Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.
Use cases
Brand Marketers
Screen creators for safety and controversy risk before committing partnership budgets.
Influencer Agencies
Standardize creator vetting with consistent, reproducible scoring across the roster.
Compliance Teams
Automate sanctions and watchlist screening with results ready for compliance review.
Developers
Integrate via REST API or use as an MCP tool in AI workflows.
Related actors
Bulk Email Verifier
Verify email deliverability at scale. MX record validation, SMTP mailbox checks, disposable and role-based detection, catch-all flagging, and confidence scoring. No external API costs.
GitHub Repository Search
Search GitHub repositories by keyword, language, topic, stars, forks. Sort by stars, forks, or recently updated. Returns metadata, topics, license, owner info, URLs. Free API, optional token for higher limits.
Website Content to Markdown
Convert any website to clean Markdown for RAG pipelines, LLM training, and AI apps. Crawls pages, strips boilerplate, preserves headings, tables, and code blocks. GFM support.
Website Tech Stack Detector
Detect 100+ web technologies on any website. Identifies CMS, frameworks, analytics, marketing tools, chat widgets, CDNs, payment systems, hosting, and more. Batch-analyze multiple sites with version detection and confidence scoring.
Ready to try Influencer Brand Safety Intelligence MCP Server?
Start for free on Apify. No credit card required.