Neuromorphic Threat Intelligence MCP Server
Pricing
This Actor uses a pay-per-event model: you only pay for what you use.
| Event | Description | Price |
|---|---|---|
| detect-spiking-anomalies | LIF spiking neural network anomaly detection | $0.10 |
| attribute-threat-campaign | STDP learning threat attribution | $0.10 |
| analyze-attack-graph | Hypergraph grammar attack reachability | $0.10 |
| simulate-vulnerability-propagation | Contact process on scale-free network | $0.08 |
| evolve-detection-network | NEAT neuroevolution topology search | $0.12 |
| compute-attack-surface | Discrete Morse theory critical cells | $0.08 |
| assess-threat-actor-dynamics | Population game ESS replicator dynamics | $0.08 |
| forecast-exploit-emergence | Le Cam deficiency detection limits | $0.10 |
Example: 100 events = $10.00 · 1,000 events = $100.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
Server endpoint: https://ryanclinton--neuromorphic-threat-intelligence-mcp.apify.actor/mcp

```json
{
  "mcpServers": {
    "neuromorphic-threat-intelligence-mcp": {
      "url": "https://ryanclinton--neuromorphic-threat-intelligence-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation
Neuromorphic threat intelligence for AI agents — this MCP server applies 8 brain-inspired and mathematical frameworks to cyber threat detection, pulling live data from 15 sources spanning vulnerability databases, internet scanning infrastructure, sanctions registries, social signals, and code repositories. It gives AI assistants like Claude, Cursor, and any MCP-compatible client the ability to reason about threats the way biological neural circuits do: through spiking dynamics, temporal learning, and topological structure.
Each tool call fires parallel requests across multiple intelligence feeds and then runs a rigorous algorithm — leaky integrate-and-fire spiking networks, STDP plasticity learning, hypergraph attack grammars, SIS epidemic propagation, NEAT neuroevolution, discrete Morse theory, evolutionary population games, or Le Cam statistical deficiency — returning structured JSON with an interpretable conclusion. This is not a wrapper around a single API. It is a multi-source reasoning engine with quantitative outputs and clear threat assessments.
What data can you access?
| Data Point | Source | Example coverage |
|---|---|---|
| 📋 CVE vulnerability records | NVD CVE Search | 200,000+ CVEs with CVSS scores |
| 🔥 Known exploited vulnerabilities | CISA KEV Catalog | Actively exploited in the wild |
| 🖥 Internet host scan data | Censys Search | Open ports, services, banners |
| 🌐 DNS records | DNS Lookup | A, MX, TXT, NS, CNAME records |
| 🔒 SSL/TLS certificate history | crt.sh CT Logs | Certificate transparency log |
| 📝 Domain registration data | WHOIS Lookup | Registrant, dates, nameservers |
| 📍 IP geolocation and ASN | IP Geolocation | Country, org, ASN, coordinates |
| 🛠 Website technology stack | Tech Stack Detector | CMS, frameworks, CDNs, libraries |
| 💻 Security code and PoCs | GitHub Repo Search | Public repositories |
| 💬 Security community discussions | Hacker News Search | HN threads and comments |
| 🚫 US Treasury sanctions | OFAC Sanctions Search | SDN list, blocked persons |
| 🌍 Global watchlists | OpenSanctions Search | 100+ programs, 40+ countries |
| 📡 Social threat signals | Bluesky Social Search | Real-time security community posts |
| 🕰 Historical web snapshots | Wayback Machine | Archive.org snapshots |
| 🔄 Website content changes | Website Change Monitor | Tracked page modifications |
Why use Neuromorphic Threat Intelligence MCP Server?
Manual threat correlation is the bottleneck in every security workflow. A threat analyst checking CVEs, cross-referencing CISA KEV, querying Censys, reviewing DNS history, scanning sanctions lists, and monitoring social channels for a single domain can spend four to six hours producing a picture that is already hours old by the time it is done. Statistical detection methods are often bolted on as an afterthought, producing binary red/green outputs without quantitative confidence.
This MCP server automates the entire correlation and analysis pipeline. It queries up to seven sources in parallel, normalises severity scores from heterogeneous formats (CVSS floats, "Critical/High/Medium" strings, raw scores), assembles structured threat indicators, and passes them through the selected algorithm. The result includes a plain-language interpretation alongside the full numerical output.
- Scheduling — run periodic threat sweeps daily or weekly to track how your attack surface evolves
- API access — trigger analysis from Python, JavaScript, or any HTTP client without leaving your toolchain
- Proxy rotation — upstream actors use Apify's built-in proxy infrastructure where needed
- Monitoring — get Slack or email alerts when MCP tool calls fail or return unexpected results
- Integrations — connect outputs to Zapier, Make, webhooks, or push results directly into SIEM pipelines
Features
- Leaky integrate-and-fire (LIF) spiking network — one neuron per threat source type, membrane dynamics governed by LIF equation with tau 10–20 ms decay constants, refractory periods of 2–5 ms, and synaptic delays of 1–5 steps
- Filippov differential inclusions at switching surfaces — when membrane potential sits within 0.05 of the threshold, the system applies a convex combination of sub- and super-threshold dynamics, producing a sliding mode; the count of these switching events is a direct measure of how aggressively the boundary between normal and anomalous is being probed
- STDP learning with Tracy-Widom random matrix edge — builds a temporal weight matrix from causal spike pairs across sources; potentiation for pre-before-post sequences, depression for the reverse; the Tracy-Widom edge from random matrix theory separates structured campaign signal from random noise
- Hebbian cell assembly detection — identifies clusters of threat sources whose activity is temporally coherent above a correlation threshold, surfacing the multi-source fingerprint of coordinated campaigns
- Hypergraph attack grammar with Floyd-Warshall reachability — models infrastructure as a context-free hypergraph grammar; production rules encode exploit-to-exploit transitions; derives all reachable attack paths and computes betweenness centrality on critical assets
- Barabasi-Albert scale-free contact process — simulates vulnerability propagation on preferential-attachment networks; computes basic reproduction number R0, epidemic threshold, hub infection order, and steady-state infection fraction
- NEAT neuroevolution with speciation and fitness sharing — evolves neural network topology and weights simultaneously; innovation numbers solve the competing conventions problem; speciation protects structural diversity; trained on 6-feature threat indicator vectors
- Discrete Morse theory CW complex — models infrastructure as a CW complex with 0-cells (assets), 1-cells (connections), and 2-cells (service clusters); gradient vector field pairing reduces the complex; unpaired critical cells represent topologically irreducible attack surface; outputs Euler characteristic and Betti numbers
- Population game replicator dynamics — simulates 2,000 threat actors across five strategies (Nation-State APT, Cybercrime, Hacktivism, Insider Threat, Supply Chain); payoff matrix calibrated from real sanctions and vulnerability density; computes evolutionary stable strategies and Lyapunov exponent
- Le Cam deficiency distance — measures information loss between current and enhanced monitoring configurations; computes minimax risk, Bayes risk, power function, ROC curve, and sample complexity for reliable detection
- Parallel actor orchestration — all upstream actor calls fire simultaneously via `Promise.all`; a 180-second timeout prevents hung runs from blocking the response
- Severity normalisation — converts CVSS floats (÷10), severity strings (Critical→0.95, High→0.80, Medium→0.50, Low→0.20), and raw numeric scores into a unified 0–1 severity scale before any algorithm runs
- Standby mode operation — runs as a persistent Express server on the Apify platform; the `/mcp` endpoint handles MCP protocol requests; a health probe on `/` returns 200 immediately for container readiness checks
- Plain-language interpretations — every tool returns an `interpretation` string with a human-readable verdict (CRITICAL / ELEVATED / NORMAL, STRUCTURED CAMPAIGN DETECTED / NO CLEAR STRUCTURE, etc.) alongside the full numeric output
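To make the LIF and Filippov features above concrete, here is a minimal single-neuron sketch. It is a simplification under stated assumptions: the real server runs one neuron per source type with synaptic delays and coupling, whereas this toy integrates one input stream with parameters drawn from the documented ranges (tau 10–20 ms, refractory 2–5 ms, 0.05 boundary tolerance).

```typescript
// Hypothetical single-neuron LIF sketch (not the actor's source):
// Euler step of dV/dt = -V/tau + I at dt = 1 ms, spike-and-reset at
// threshold theta, plus a count of steps spent within the Filippov
// boundary tolerance around the threshold.
interface LifResult {
  spikeCount: number;
  filippovSwitchingCount: number;
}

function simulateLif(
  input: number[],      // injected current per 1 ms step
  tau = 15,             // membrane decay constant (ms), documented range 10-20
  theta = 1.0,          // spike threshold
  refractoryMs = 3,     // documented range 2-5 ms
  boundaryTol = 0.05,   // Filippov boundary tolerance from the doc
): LifResult {
  let v = 0;
  let refractory = 0;
  let spikeCount = 0;
  let filippovSwitchingCount = 0;
  for (const i of input) {
    if (refractory > 0) { refractory--; continue; } // ignore input while refractory
    v += -v / tau + i;                              // leaky integration, dt = 1 ms
    if (Math.abs(v - theta) < boundaryTol) filippovSwitchingCount++;
    if (v >= theta) {                               // fire, reset, enter refractory
      spikeCount++;
      v = 0;
      refractory = refractoryMs;
    }
  }
  return { spikeCount, filippovSwitchingCount };
}
```

With a sustained input (e.g. a steady stream of high-severity indicators), the neuron fires periodically; with zero input it stays silent, which is the intuition behind per-source firing rates in the output.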
Use cases for neuromorphic threat intelligence
Security operations and SOC triage
Security analysts receiving a new indicator of compromise — a domain, an IP, a CVE — need rapid contextual enrichment. Instead of opening six browser tabs, call detect_spiking_anomalies with the target. The tool queries NVD, CISA KEV, Censys, DNS, SSL, and tech stack in parallel and runs the LIF simulation. Anomalous neuron firing patterns surface in seconds, with the network synchrony index telling you whether multiple sources are lighting up together.
Threat campaign attribution
Incident response teams trying to attribute a cluster of events to a specific threat group can use attribute_threat_campaign. Provide keywords (e.g., ["Volt Typhoon", "critical infrastructure", "living off the land"]). The STDP engine correlates temporal patterns across CVEs, KEV entries, OFAC sanctions, OpenSanctions, GitHub repos, and Hacker News. If the spectral radius of the weight matrix exceeds the Tracy-Widom edge, there is statistically significant temporal structure — a coordinated campaign fingerprint rather than coincidence.
Infrastructure attack path analysis
Red teams and security architects mapping attack paths through an environment can use analyze_attack_graph on a target domain. The hypergraph grammar builds an asset topology from Censys, DNS, SSL, WHOIS, tech stack, and IP geo data, then derives all viable attack paths with associated probabilities. The betweenness centrality output identifies which asset, if compromised, opens the most subsequent paths — exactly the prioritisation information needed for remediation planning.
Vulnerability risk quantification
Risk managers deciding which CVEs to patch first can use simulate_vulnerability_propagation to model how a specific vulnerability class propagates through a Barabasi-Albert network calibrated to real CVE severity data. The R0 value answers the key question directly: is this vulnerability in an epidemic regime or will it die out naturally? Hub infection order tells you which high-degree nodes — the most connected assets — fall first.
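For intuition on why hubs dominate the R0 question, the heterogeneous mean-field approximation from network epidemiology is useful. This is a textbook estimate, not necessarily the actor's exact computation: on a network with per-contact infection rate beta and recovery rate gamma, the effective reproduction number scales with the degree moment ratio ⟨k²⟩/⟨k⟩, which is large on scale-free graphs.

```typescript
// Heterogeneous mean-field estimate for SIS dynamics on a network:
// R0 ~ (beta / gamma) * <k^2> / <k>. High-degree hubs inflate <k^2>,
// so scale-free topologies sit closer to (or past) the epidemic threshold.
// Illustrative approximation only, not the actor's simulation.
function estimateR0(degrees: number[], beta: number, gamma: number): number {
  const meanK = degrees.reduce((s, k) => s + k, 0) / degrees.length;
  const meanK2 = degrees.reduce((s, k) => s + k * k, 0) / degrees.length;
  return (beta / gamma) * (meanK2 / meanK);
}
```

A 5-node star (one hub of degree 4) yields a higher estimate than a 5-node ring at the same rates, despite having fewer edges, which is exactly the hub effect the tool's hub infection order surfaces.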
AI agent-driven security automation
Security teams building LLM-based agents for continuous monitoring can register this MCP server as a tool provider. Claude or any MCP-compatible agent can then call forecast_exploit_emergence on a threat domain to get detection limits, ROC curves, and sample complexity estimates — letting the agent autonomously decide whether current monitoring is sufficient or recommend a budget increase.
Detection engineering and capability assessment
Detection engineers evaluating whether their sensor coverage is adequate for a specific threat domain can run forecast_exploit_emergence. Le Cam deficiency quantifies the information loss between current and enhanced monitoring. If the detection limit exceeds the signal strength, the tool outputs the exact budget multiplier needed for reliable detection — a mathematical justification for resource requests.
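As a rough sense of what a sample-complexity number means, here is the standard Gaussian detection-theory formula (a textbook result, not the actor's exact computation): to detect a mean shift delta against noise sigma at a 5% false-positive rate with 80% power, you need roughly n = ((z₀.₉₅ + z₀.₈₀) · sigma / delta)² samples.

```typescript
// Textbook Gaussian sample-complexity sketch (illustrative, not the
// actor's formula): samples needed to detect a mean shift `delta` in
// noise of std dev `sigma` at 5% false-positive rate and 80% power.
const Z_95 = 1.645;  // standard normal quantile for 95%
const Z_80 = 0.8416; // standard normal quantile for 80%

function sampleComplexity(delta: number, sigma: number): number {
  return Math.ceil(((Z_95 + Z_80) * sigma / delta) ** 2);
}
```

Halving the signal strength quadruples the required samples, which is why weak-signal threat domains need disproportionately larger monitoring budgets.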
How to connect this MCP server
Step 1: Get your Apify API token
Sign up at apify.com. Your API token is at console.apify.com/account/integrations.
Step 2: Add to your MCP client
Claude Desktop — add to ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "neuromorphic-threat-intelligence": {
      "url": "https://neuromorphic-threat-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```
Cursor — add to .cursor/mcp.json in your project root:
```json
{
  "mcpServers": {
    "neuromorphic-threat-intelligence": {
      "url": "https://neuromorphic-threat-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```
Step 3: Verify the connection
Ask your AI assistant: "List the available tools from the neuromorphic threat intelligence server." It should respond with all 8 tool names and descriptions.
Step 4: Run your first analysis
Try: "Use detect_spiking_anomalies to analyze threats for the domain apache.org with default parameters." The tool will query 6 data sources in parallel and return the LIF simulation results with a plain-language interpretation.
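If you want to call the server without an MCP client, the underlying request is plain JSON-RPC 2.0. The sketch below builds a `tools/call` body using this server's documented tool and parameter names; the exact envelope your client sends may differ, and actually POSTing it to the `/mcp` endpoint (with the `Authorization: Bearer` header from Step 1) is left out.

```typescript
// Build a JSON-RPC 2.0 "tools/call" request body for an MCP endpoint.
// Tool name and arguments match this server's parameter tables; sending
// the request (fetch + Authorization header) is omitted here.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  });
}

const body = buildToolCall(1, "detect_spiking_anomalies", {
  target: "apache.org",
  simulation_time: 500,
  spike_threshold: 1.0,
});
```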
MCP tools reference
| Tool | Algorithm | Data sources | Best for |
|---|---|---|---|
| detect_spiking_anomalies | LIF spiking network + Filippov | NVD, CISA KEV, Censys, DNS, SSL, Tech Stack | Real-time anomaly detection across threat feeds |
| attribute_threat_campaign | STDP + Tracy-Widom edge | NVD, CISA KEV, OFAC, OpenSanctions, GitHub, HN | Campaign attribution by temporal causal pattern |
| analyze_attack_graph | Hypergraph grammar + Floyd-Warshall | Censys, DNS, SSL, WHOIS, Tech Stack, IP Geo, NVD | Attack path analysis and critical asset identification |
| simulate_vulnerability_propagation | SIS contact process + Barabasi-Albert | NVD, CISA KEV, Censys, Tech Stack | Epidemic threshold and hub infection modelling |
| evolve_detection_network | NEAT neuroevolution | NVD, CISA KEV, GitHub, Hacker News | Evolving optimal detection network architecture |
| compute_attack_surface | Discrete Morse theory + CW complex | Censys, DNS, SSL, WHOIS, Tech Stack, IP Geo | Topological attack surface quantification |
| assess_threat_actor_dynamics | Population game ESS + replicator | OFAC, OpenSanctions, NVD, CISA KEV, Bluesky, Wayback | Threat actor strategy evolution forecasting |
| forecast_exploit_emergence | Le Cam deficiency distance | NVD, CISA KEV, GitHub, Hacker News, Change Monitor, Bluesky | Detection limits and monitoring adequacy |
Tool parameters
detect_spiking_anomalies
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| target | string | Yes | — | Domain, IP, or keyword to analyze |
| simulation_time | number | No | 500 | LIF simulation duration in ms (100–2000) |
| spike_threshold | number | No | 1.0 | Neuron spike threshold theta (0.5–2.0); lower = more sensitive |
| max_results | number | No | 20 | Max results per data source (5–50) |
attribute_threat_campaign
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| campaign_keywords | string[] | Yes | — | 1–10 keywords identifying the campaign |
| tau_plus | number | No | 20 | STDP potentiation time constant in ms (5–100) |
| tau_minus | number | No | 20 | STDP depression time constant in ms (5–100) |
| max_results | number | No | 15 | Max results per source (5–30) |
analyze_attack_graph
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| target_domain | string | Yes | — | Domain to build attack graph for |
| max_derivation_depth | number | No | 10 | Grammar derivation depth (3–20) |
| max_results | number | No | 15 | Max results per source (5–30) |
simulate_vulnerability_propagation
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| target | string | Yes | — | Domain or technology for vulnerability context |
| network_size | number | No | 500 | Simulated network node count (50–2000) |
| infection_rate | number | No | 0.1 | Per-contact infection probability (0.01–0.5); calibrated by CVE severity |
| recovery_rate | number | No | 0.05 | Recovery probability per time step (0.01–0.5) |
| initial_infected | number | No | 5 | Initially infected nodes (1–50) |
| time_steps | number | No | 200 | Simulation time steps (50–500) |
evolve_detection_network
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| threat_domain | string | Yes | — | Threat domain to evolve a detector for |
| population_size | number | No | 100 | NEAT population size (20–200) |
| generations | number | No | 50 | Evolution generations (10–100) |
| target_accuracy | number | No | 0.85 | Target detection accuracy (0.6–0.99) |
| max_results | number | No | 15 | Max results per source (5–30) |
compute_attack_surface
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| target_domain | string | Yes | — | Domain for Morse theory attack surface analysis |
| max_results | number | No | 15 | Max results per source (5–30) |
assess_threat_actor_dynamics
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| threat_context | string | Yes | — | Context string (e.g., "financial sector", "healthcare") |
| time_steps | number | No | 500 | Replicator dynamics time steps (100–2000) |
| mutation_rate | number | No | 0.01 | Strategy mutation rate (0.001–0.1) |
| max_results | number | No | 15 | Max results per source (5–30) |
forecast_exploit_emergence
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| threat_domain | string | Yes | — | Threat domain for detection limit forecasting |
| monitoring_budget | number | No | 100 | Monitoring sample size / sensor count (10–10000) |
| baseline_variance | number | No | 1.0 | Baseline noise variance (0.01–10) |
| max_results | number | No | 15 | Max results per source (5–30) |
Configuration tips
- Lower `spike_threshold` for noisier environments — 0.7 increases sensitivity; raise to 1.3 to reduce false positives on well-known domains
- Use `tau_plus` = `tau_minus` for symmetric STDP learning; set `tau_plus` > `tau_minus` if you want to weight causal (pre-before-post) sequences more heavily
- Set `max_derivation_depth` to 15+ only if you need exhaustive attack path enumeration; depth 5–8 is sufficient for most assessments and runs faster
- Increase `monitoring_budget` in `forecast_exploit_emergence` to match your actual sensor count for calibrated Le Cam deficiency estimates
- Use specific `threat_context` strings like "CVE-2024-4577" or "Log4Shell remote code execution" rather than vague terms — the upstream actors rank results by relevance
Output example
detect_spiking_anomalies output for target coldfusion.apache.org:
```json
{
  "target": "coldfusion.apache.org",
  "parameters": {
    "simulation_time": 500,
    "spike_threshold": 1.0
  },
  "network": {
    "neurons": [
      { "id": "N_cve", "spikeCount": 47, "firingRate": 0.094, "meanMembrane": 0.8312 },
      { "id": "N_kev", "spikeCount": 31, "firingRate": 0.062, "meanMembrane": 0.6741 },
      { "id": "N_host", "spikeCount": 12, "firingRate": 0.024, "meanMembrane": 0.3108 },
      { "id": "N_dns", "spikeCount": 8, "firingRate": 0.016, "meanMembrane": 0.2894 },
      { "id": "N_ssl", "spikeCount": 15, "firingRate": 0.030, "meanMembrane": 0.3520 },
      { "id": "N_tech", "spikeCount": 22, "firingRate": 0.044, "meanMembrane": 0.5017 }
    ],
    "totalSpikes": 135,
    "networkActivity": 0.7842,
    "synchronyIndex": 0.6391,
    "filippovSwitchingCount": 14
  },
  "anomalies": [
    { "neuronId": "N_cve", "score": 0.9312, "type": "burst", "timestamp": 1742553600000 },
    { "neuronId": "N_kev", "score": 0.8741, "type": "synchrony", "timestamp": 1742553602441 },
    { "neuronId": "N_tech", "score": 0.7203, "type": "rate_elevation", "timestamp": 1742553605882 },
    { "neuronId": "N_ssl", "score": 0.6891, "type": "burst", "timestamp": 1742553608103 }
  ],
  "burstEvents": [
    { "time": 82, "neuronCount": 4, "severity": 0.8841 },
    { "time": 241, "neuronCount": 3, "severity": 0.7120 },
    { "time": 388, "neuronCount": 5, "severity": 0.9203 }
  ],
  "synapses": [
    { "pre": "N_cve", "post": "N_kev", "weight": 0.3841, "potentiated": true },
    { "pre": "N_kev", "post": "N_tech", "weight": 0.2917, "potentiated": true }
  ],
  "interpretation": "CRITICAL: 4 anomalous neurons detected. Network synchrony 0.6391 indicates correlated threat activity. 3 burst events observed. 14 Filippov switching events at threshold boundaries.",
  "dataSources": {
    "cve": 20,
    "kev": 14,
    "hosts": 18,
    "dns": 7,
    "ssl": 11,
    "tech": 15
  }
}
```
Output fields
Every tool response includes an interpretation string (plain-language verdict), a dataSources object (record count per source), and tool-specific fields listed below.
| Field | Tool | Type | Description |
|---|---|---|---|
| network.neurons[].spikeCount | spiking | number | Total spikes per neuron during simulation |
| network.synchronyIndex | spiking | number | Pairwise neuron synchrony 0–1 |
| network.filippovSwitchingCount | spiking | number | Threshold boundary crossing events |
| anomalies[].score | spiking | number | Anomaly score 0–1 |
| anomalies[].type | spiking | string | burst / synchrony / rate_elevation |
| burstEvents[].severity | spiking | number | Burst event severity 0–1 |
| stdpResults.spectralRadius | attribution | number | Largest weight matrix eigenvalue |
| stdpResults.tracyWidomEdge | attribution | number | Random matrix noise boundary |
| stdpResults.learningConverged | attribution | boolean | Whether STDP weights stabilised |
| hebbianAssemblies[].coherence | attribution | number | Temporal coherence of threat cluster 0–1 |
| attackGraph.compromisedAssets | attack graph | number | Assets reachable via grammar derivation |
| attackGraph.attackPaths[].probability | attack graph | number | Path traversal probability |
| attackGraph.criticalNodes[].centrality | attack graph | number | Betweenness centrality score |
| epidemiology.basicReproductionNumber | propagation | number | R0 — secondary infections per case |
| epidemiology.epidemicThreshold | propagation | boolean | Whether R0 exceeds critical threshold |
| hubInfectionOrder[].infectedAt | propagation | number | Time step hub node was infected |
| performance.detectionAccuracy | NEAT | number | Evolved network detection accuracy |
| performance.falsePositiveRate | NEAT | number | FPR of best evolved genome |
| bestNetwork.hiddenNodes | NEAT | number | Hidden nodes in best topology |
| morseComplex.criticalCells | attack surface | number | Unpaired (irreducible) cell count |
| morseComplex.eulerCharacteristic | attack surface | number | Topological invariant χ |
| morseComplex.bettiNumbers | attack surface | number[] | Betti numbers [β0, β1, β2] |
| ecosystem.strategies[].isESS | dynamics | boolean | Evolutionary stability of strategy |
| ecosystem.lyapunovExponent | dynamics | number | Positive = chaotic, negative = stable |
| ecosystem.dominantStrategy | dynamics | string | Strategy with highest equilibrium share |
| detectionTheory.deficiency | forecast | number | Le Cam deficiency 0 (no loss) to 1 (total loss) |
| detectionTheory.detectionLimit | forecast | number | Minimum detectable exploit severity |
| detectionTheory.sampleComplexity | forecast | number | Samples needed for 80% detection power |
| rocCurve | forecast | array | False positive rate vs. true positive rate pairs |
How much does it cost to run neuromorphic threat analysis?
This MCP server uses pay-per-event pricing — you pay $0.04 per tool call. Each call orchestrates 4–7 upstream actor runs in parallel; platform compute costs are included in the per-call price.
| Scenario | Tool calls | Cost per call | Total cost |
|---|---|---|---|
| Single investigation | 1 | $0.04 | $0.04 |
| Domain audit (all 8 tools) | 8 | $0.04 | $0.32 |
| Daily threat sweep (10 targets) | 10 | $0.04 | $0.40 |
| Weekly SOC workflow (50 queries) | 50 | $0.04 | $2.00 |
| Continuous monitoring (500/month) | 500 | $0.04 | $20.00 |
The Apify Free plan includes $5 of monthly credits — enough for 125 tool calls with no subscription required.
You can set a maximum spending limit per run to control costs. The actor stops when your budget is reached and returns a clear error message rather than silently truncating results.
For comparison, commercial threat intelligence platforms charge $300–1,500/month for multi-source correlation with far fewer algorithmic outputs. Most users of this MCP server spend $2–20/month for richer, more quantitative analysis.
How Neuromorphic Threat Intelligence MCP Server works
Phase 1 — Parallel data collection
Each tool call identifies its required data sources (4–7 depending on the tool) and dispatches all actor calls simultaneously using Promise.all via runActorsParallel. Each upstream call has a 180-second timeout and 256 MB memory allocation. If an upstream actor fails or returns no dataset, the client logs a warning and returns an empty array — the algorithm still runs on whatever data was collected.
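The failure handling described above (a failed or timed-out source degrades to an empty array instead of aborting the run) can be sketched as follows. The helper name and `Item` type are illustrative, not the actor's source:

```typescript
// Map settled upstream-actor promises to per-source result arrays.
// A rejected or timed-out source yields [], so the downstream algorithm
// still runs on whatever data was collected. (Names are illustrative.)
type Item = Record<string, unknown>;

function collectResults(
  settled: PromiseSettledResult<Item[]>[],
): Item[][] {
  return settled.map((s) =>
    s.status === "fulfilled" ? s.value : [] // failed source -> empty array
  );
}
```

In practice this sits behind something like `Promise.allSettled` over the parallel actor calls, each wrapped in its own timeout.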
Phase 2 — Severity normalisation and indicator assembly
Raw results from heterogeneous sources are normalised into a unified ThreatIndicator format. Severity is extracted from whichever field is present: cvssScore (divided by 10), severity string enum (critical→0.95, high→0.80, medium→0.50, low→0.20), numeric score, or source-type defaults (kev→0.85 because CISA KEV entries are by definition actively exploited). Timestamps are parsed from publishedDate, dateAdded, timestamp, date, createdAt, or created fields in that priority order.
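A sketch of that normalisation logic, using the mappings stated above (function and type names are illustrative, not the actor's source; the 0.5 fallback for unrecognised records is an assumption):

```typescript
// Severity normalisation sketch: CVSS floats / 10, fixed string mappings,
// clamped raw scores, and the KEV default of 0.85. Names are illustrative.
type RawRecord = {
  cvssScore?: number;
  severity?: string;
  score?: number;
  sourceType?: string;
};

const SEVERITY_STRINGS: Record<string, number> = {
  critical: 0.95, high: 0.8, medium: 0.5, low: 0.2,
};

function normalizeSeverity(r: RawRecord): number {
  if (typeof r.cvssScore === "number") return Math.min(1, r.cvssScore / 10);
  const s = r.severity?.toLowerCase();
  if (s && s in SEVERITY_STRINGS) return SEVERITY_STRINGS[s];
  if (typeof r.score === "number") return Math.max(0, Math.min(1, r.score));
  if (r.sourceType === "kev") return 0.85; // actively exploited by definition
  return 0.5; // neutral default (assumption, not documented)
}
```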
Phase 3 — Algorithm execution
The assembled indicators are passed to the chosen algorithm from scoring.ts. A deterministic seeded PRNG (mulberry32 initialized from a hash of indicator IDs) ensures reproducible results for the same input data. Key implementation details:
- LIF simulation runs at `dt = 1.0` ms steps for the specified `simulation_time`. All-to-all synapses with random initial weights 0.1–0.4 and axonal delays 1–5 steps. Filippov switching detection uses a 0.05 boundary tolerance around the threshold.
- STDP weight matrix is built from temporal spike pairs within a 500-minute window. The Tracy-Widom edge is computed as `2 * sqrt(n)`, where n is the neuron count, matching the standard result for the largest eigenvalue of a GUE random matrix.
- Hypergraph grammar uses context-free production rules seeded from asset types. Floyd-Warshall computes all-pairs reachability in O(n³). Betweenness centrality is approximated by counting how often each node appears in shortest paths.
- Contact process uses Barabasi-Albert preferential attachment to generate the network, then runs discrete-time SIS dynamics with infection rate calibrated by average CVE severity from real NVD data.
- NEAT evolves 6-feature input vectors (severity, type flags for cve/kev/sanction, indicator age, metadata richness) toward a binary threat/noise classification output. Speciation uses a compatibility threshold with weight contributions of 1.0 for disjoint genes and 0.4 for weight differences.
- Discrete Morse assigns a Morse function value to each cell and computes discrete gradient vector fields by pairing cells of adjacent dimensions. Unpaired cells become critical and count toward the attack surface.
- Population game calibrates the payoff matrix entries from sanctions density and vulnerability density signals pulled from the live data. The replicator equation is integrated with Euler method at each time step with mutation injection.
- Le Cam deficiency computes the total variation distance between the likelihood ratio distributions of the two experiments, then derives the power function, minimax risk, and sample complexity from Gaussian approximation.
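The deterministic seeding mentioned at the top of Phase 3 can be sketched as follows. mulberry32 is the PRNG named in the text; the ID-hashing helper here is a hypothetical stand-in for the actor's actual hash:

```typescript
// mulberry32: a 32-bit seeded PRNG, so identical indicator sets replay
// identical simulations. The FNV-style ID fold below is an illustrative
// stand-in for the actor's real seed derivation.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

function seedFromIds(ids: string[]): number {
  let h = 2166136261; // FNV-1a offset basis (illustrative hash)
  for (const id of ids) {
    for (let i = 0; i < id.length; i++) {
      h = Math.imul(h ^ id.charCodeAt(i), 16777619);
    }
  }
  return h >>> 0;
}
```

Two runs seeded from the same indicator IDs produce byte-identical random streams, which is what makes tool outputs reproducible for unchanged input data.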
Phase 4 — Interpretation and structured output
Each algorithm produces both raw numeric fields and a plain-language interpretation string. The interpretation uses threshold-based classification (e.g., anomaly count > 3 → CRITICAL, > 0 → ELEVATED, 0 → NORMAL) to give the AI agent a clear signal it can act on without requiring it to interpret the mathematics.
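For the spiking tool, that threshold mapping reduces to a few lines (the cutoffs below are the ones stated above; other tools use analogous, tool-specific cutoffs):

```typescript
// Threshold-based verdict for the spiking anomaly tool, as described:
// more than 3 anomalies -> CRITICAL, any anomalies -> ELEVATED, else NORMAL.
function verdict(anomalyCount: number): "CRITICAL" | "ELEVATED" | "NORMAL" {
  if (anomalyCount > 3) return "CRITICAL";
  if (anomalyCount > 0) return "ELEVATED";
  return "NORMAL";
}
```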
Tips for best results
- Use specific targets over broad keywords. `detect_spiking_anomalies` with target `"CVE-2024-4577"` retrieves highly relevant CVEs and KEV entries. A broad keyword like `"web"` will return generic results that dilute the severity signal.
- Run the full 8-tool suite for high-value targets. Each tool illuminates a different dimension. `detect_spiking_anomalies` catches correlated multi-source anomalies; `analyze_attack_graph` maps the infrastructure; `compute_attack_surface` quantifies irreducible exposure. Together they give a complete picture.
- Lower `spike_threshold` for early warning. The default of 1.0 is calibrated for general use. For continuous monitoring of critical infrastructure, 0.7–0.8 catches weaker signals at the cost of more false positives.
- Combine `evolve_detection_network` with specific threat domains. The NEAT algorithm trains on indicator data pulled from your query. Use precise domain strings like `"ransomware healthcare"` rather than `"malware"` to get a detector specialized for your context.
- Use `forecast_exploit_emergence` before expanding monitoring. The Le Cam deficiency output tells you quantitatively whether doubling your sensor count improves detection — and by how much. Use the `sampleComplexity` field to justify monitoring budget requests.
- Interpret `synchronyIndex` alongside the `anomalies` count. A high synchrony index (>0.6) with zero anomalies suggests borderline activity where sources are correlated but not yet crossing individual thresholds — a leading indicator worth watching.
- Use `assess_threat_actor_dynamics` quarterly. The evolutionary game output reflects the balance of real sanctions activity and vulnerability data at query time. Running it quarterly tracks shifts in which threat actor strategies are gaining ground in your sector.
- Set spending limits on automated workflows. If you run this MCP server in a Claude agent that loops, add a per-run budget limit to prevent runaway costs from unexpected recursion.
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| Website Contact Scraper | Enrich attack graph nodes with human contact data — pair infrastructure exposure from analyze_attack_graph with responsible disclosure contacts |
| Website Tech Stack Detector | Pre-run tech stack detection to build a richer technology inventory before running simulate_vulnerability_propagation with technology-specific CVEs |
| WHOIS Domain Lookup | Bulk WHOIS lookups for a portfolio of domains before feeding each through detect_spiking_anomalies for organisation-wide threat scanning |
| Website Change Monitor | Feed change events as input to detect_spiking_anomalies — defacement, redirect injection, or new script tags are high-severity indicators |
| Company Deep Research | Layer company intelligence onto threat actor dynamics — use assess_threat_actor_dynamics for the sector, then cross-reference with company-specific exposure |
| B2B Lead Qualifier | For MSSP sales teams: qualify prospects by first running compute_attack_surface on their domain to quantify their exposure before outreach |
| Trustpilot Review Analyzer | Correlate public reputation signals with threat actor dynamics in sectors where brand attacks and cyber attacks co-occur |
Limitations
- Passive data only — all sources are publicly available. The server sends no probes, no packets, and no active scanning requests to target infrastructure. Coverage is limited to what is publicly indexed.
- Upstream actor availability — if NVD, Censys, or another upstream source is temporarily unavailable or returns no results, that source's indicators are omitted. The algorithm runs on whatever data was collected; results may be less precise with incomplete inputs.
- LIF sensitivity to threshold parameter — the default spike threshold of 1.0 is a starting point. Domains with inherently high CVE counts (e.g., Linux kernel) will generate more neuron firing regardless of threat state; tune the threshold upward for well-known high-volume targets.
- STDP does not imply causation — temporal correlation between threat sources is a signal, not proof of a coordinated campaign. The Tracy-Widom edge reduces false positives significantly but cannot eliminate them.
- Barabási-Albert is an approximation — real infrastructure topology differs from the preferential attachment model. The contact process simulation gives directionally correct R0 estimates but not precise infection counts for specific networks.
- NEAT accuracy depends on training data volume — with fewer than 10 threat indicators, NEAT fitness evaluation is noisy and the evolved network may not generalise. Use a `max_results` of 20+ for meaningful neuroevolution.
- Discrete Morse attack surface uses sampled connections — edges between infrastructure cells are sampled at 35% probability when not directly inferrable from source data. Results are an approximation of the true topology.
- Population game ESS is coNP-complete in general — the implementation computes approximate ESS using replicator dynamics convergence; pure Nash equilibria may exist that are not detected.
- No persistent state between calls — each tool call starts fresh. There is no memory of previous analysis; build your own correlation layer if you need to track change over time.
- Rate limits on upstream actors — running all 8 tools simultaneously against the same target will fire 40+ upstream actor calls. Spread high-volume workloads across time or contact Apify for enterprise quotas.
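The LIF threshold behaviour described above can be pictured with a toy model. The sketch below is not the server's implementation: it is a minimal discrete-time leaky integrate-and-fire neuron with made-up input values, showing why a fixed `spike_threshold` of 1.0 over-fires on a steady high-volume input and why raising the threshold restores discrimination.

```python
def lif_spike_count(inputs, threshold=1.0, leak=0.9):
    """Integrate a sequence of input currents and count threshold crossings."""
    potential, spikes = 0.0, 0
    for current in inputs:
        potential = potential * leak + current   # leak, then integrate the input
        if potential >= threshold:               # fire and reset
            spikes += 1
            potential = 0.0
    return spikes

# A steady high-volume input (made-up values standing in for per-period CVE
# counts) fires constantly at the default threshold...
busy = [0.6] * 20
print(lif_spike_count(busy, threshold=1.0))   # fires on every other step

# ...while a higher threshold restores discrimination for high-volume targets.
print(lif_spike_count(busy, threshold=3.0))   # only occasional spikes
```

The same intuition applies in reverse: for quiet, low-volume domains, lowering the threshold (as the troubleshooting section suggests with 0.7) makes the detector more sensitive.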
Integrations
- Zapier — trigger `detect_spiking_anomalies` on a schedule and push CRITICAL results to a Slack channel or PagerDuty incident
- Make — build a multi-step scenario that runs `analyze_attack_graph` on new domains from a Google Sheet and writes results back to a second sheet
- Google Sheets — export campaign attribution results from `attribute_threat_campaign` into a spreadsheet for manual review and tagging
- Apify API — call the MCP server endpoint directly from SIEM or SOAR workflows via HTTP POST to `/mcp`
- Webhooks — fire a webhook when a run exceeds a spending threshold, routing alerts to your incident management tool
- LangChain / LlamaIndex — register this MCP server as a tool provider in a LangChain agent for autonomous threat investigation workflows
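As an illustration of the HTTP integration path, this sketch builds a JSON-RPC 2.0 `tools/call` request body of the kind the streamable HTTP MCP transport expects at `/mcp`. The tool name and arguments come from this page; treat the exact argument schema and header set as assumptions to verify against your client.

```python
import json

def mcp_tool_call_payload(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request body for the /mcp endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: the spiking-anomaly tool with parameters documented on this page.
body = mcp_tool_call_payload(
    "detect_spiking_anomalies",
    {"query": "Apache Log4j", "max_results": 20},
)
print(body)

# To send it (sketch only, not executed here): POST with your Apify token.
#   import urllib.request
#   req = urllib.request.Request(
#       "https://ryanclinton--neuromorphic-threat-intelligence-mcp.apify.actor/mcp",
#       data=body.encode(),
#       headers={"Content-Type": "application/json",
#                "Accept": "application/json, text/event-stream",
#                "Authorization": "Bearer <YOUR_APIFY_TOKEN>"},
#       method="POST")
#   urllib.request.urlopen(req)
```

Remember to keep the client timeout at 180 seconds or more, as noted in the FAQ.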
Troubleshooting
**Empty or minimal results despite a known-vulnerable target** — Most tools fetch up to `max_results` items per source, with a default of 15–20. If NVD returns few CVEs for the query string, try more specific CVE identifiers or technology names (e.g., "Apache Log4j" rather than "java"). Also check that your Apify token has sufficient credits — a budget limit reached on upstream actors will silently return empty arrays.
**`interpretation` says NORMAL for a known high-risk domain** — The LIF spike threshold may be too high for a domain with many CVEs. Lower `spike_threshold` to 0.7 and re-run. Alternatively, use `attribute_threat_campaign` with specific CVE keywords to check whether the issue is in data retrieval or in the anomaly detection threshold.
**STDP spectral radius always below the Tracy-Widom edge** — This is the expected result when threat indicators are sparse or temporally spread over months. The Tracy-Widom edge scales with the number of neurons. With fewer than 4 distinct indicator types, the test has low statistical power. Increase `max_results` to collect more temporal data points.
**NEAT evolution not reaching `target_accuracy`** — Accuracy is computed against severity-labelled training patterns extracted from the live data. If the data is heavily imbalanced (all high-severity or all low-severity), the classifier may not have enough negative examples to train well. Use a more specific `threat_domain` string to get a balanced mix, or lower `target_accuracy` to 0.75.
**Contact process simulation shows an epidemic for benign domains** — The infection rate is calibrated from the average CVSS score of CVEs matching the query. A query that matches many critical CVEs will raise the calibrated rate above the epidemic threshold even for a secure target. Use a more specific query that reflects your actual target technology and version.
**Run timeout error** — Each tool call has a 180-second budget per upstream actor. When 6–7 actors run in parallel, the wall time is bounded by the slowest single actor. If timeouts occur, reduce `max_results` to 10 and retry. Persistent timeouts may indicate upstream service degradation.
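If timeouts persist, a simple client-side pattern is to retry with a smaller `max_results` and a short backoff. The wrapper below is a hypothetical sketch: `call` stands in for whatever function your client uses to invoke a tool, and the `TimeoutError` convention is assumed.

```python
import time

def call_with_backoff(call, max_results=20, attempts=3):
    """Retry a tool call, halving max_results after each timeout.

    `call` is a hypothetical wrapper: any function that accepts max_results
    and raises TimeoutError when the 180-second upstream budget is exceeded.
    """
    for attempt in range(attempts):
        try:
            return call(max_results=max_results)
        except TimeoutError:
            max_results = max(5, max_results // 2)   # smaller fetch, faster actors
            time.sleep(0.5 * 2 ** attempt)           # brief exponential backoff
    raise TimeoutError("upstream actors still timing out; likely degradation")
```

Halving the fetch size shrinks the slowest actor's workload, which is what bounds the wall time for parallel calls.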
Responsible use
- This MCP server only accesses publicly available data from vulnerability databases, certificate transparency logs, DNS records, internet scan data, and public social platforms.
- Do not use this server to conduct reconnaissance on systems you are not authorised to assess.
- Respect the terms of service of upstream data sources including NVD, CISA, Censys, and GitHub.
- Attack graph and vulnerability propagation outputs are for defensive analysis only. Do not use them to plan unauthorised access.
- For guidance on web scraping and data access legality, see Apify's guide.
FAQ
**What does neuromorphic threat intelligence mean in practice?** Traditional threat intelligence tools apply rule-based correlation: if CVE score > 7 and host is internet-facing, alert. Neuromorphic approaches map threat sources to neurons, use spiking dynamics to detect correlated activity that rules miss, and apply biologically inspired learning (STDP) to find temporal causal patterns. The output is quantitative — synchrony indices, spectral radii, Euler characteristics — not just binary flags.
**How is this different from a SIEM or threat intelligence platform like Recorded Future or Mandiant?** Commercial platforms like Recorded Future or Mandiant Advantage focus on curated analyst reports, indicator enrichment, and integration with ticketing systems. This MCP server applies mathematical frameworks to open-source data and returns raw quantitative outputs. It is a reasoning layer for AI agents, not a dashboard. The two are complementary: use this server for algorithmic analysis, feed results into your existing SIEM for case management.
**How many data sources does each tool call query?**
Between 4 and 7, depending on the tool. `analyze_attack_graph` and `compute_attack_surface` query 6–7 sources. `simulate_vulnerability_propagation` queries 4. All calls run in parallel, so wall time is bounded by the slowest single source.
**Is this active scanning or passive intelligence gathering?** Entirely passive. The server queries publicly available vulnerability databases, certificate transparency logs, DNS records, published internet scan data (Censys), and public social/code platforms. No packets are sent to target infrastructure.
**How accurate is the STDP campaign attribution?** Accuracy depends on data volume and temporal span. The Tracy-Widom edge test is a rigorous statistical test from random matrix theory: a spectral radius above the edge means the correlation structure is statistically unlikely to be random at a significance level derived from the GUE distribution. It is not a probability of attribution — it is evidence of temporal structure.
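As a toy picture of the edge test (not the server's actual statistic): for an n×n random symmetric matrix with unit-variance entries, the spectral radius concentrates near 2√n, so a radius well above that edge signals non-random correlation structure. The power-iteration sketch below illustrates this under those simplifying assumptions.

```python
import math
import random

def spectral_radius(matrix, iters=500):
    """Largest absolute eigenvalue of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector approximates the dominant eigenvalue
    return abs(sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)))

# Random symmetric matrix with unit-variance off-diagonal entries (GOE-style):
# pure noise, so its spectral radius should sit near the 2*sqrt(n) edge.
random.seed(7)
n = 40
a = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
sym = [[(a[i][j] + a[j][i]) / math.sqrt(2.0) for j in range(n)] for i in range(n)]
edge = 2.0 * math.sqrt(n)
print(round(spectral_radius(sym), 2), "vs edge", round(edge, 2))
# A radius clearly above the edge would indicate real correlation structure.
```

The actual Tracy-Widom test refines this bulk-edge comparison with the fluctuation law of the largest eigenvalue, which is what gives the significance level quoted above.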
**Can I schedule this MCP server to run automated threat sweeps?** Yes. You can call the MCP endpoint programmatically from any HTTP client on a schedule. Use Apify's built-in scheduling to trigger actor runs that in turn call the MCP server, or integrate via the Apify API in a cron job or workflow tool.
**What threat actor strategies are modelled in `assess_threat_actor_dynamics`?**
Five strategies: Nation-State APT, Cybercrime, Hacktivism, Insider Threat, and Supply Chain. The payoff matrix entries are calibrated from the volume of OFAC/OpenSanctions entries and the CVE/KEV density returned for the `threat_context` query, so the same strategies are re-weighted for different sector contexts.
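To make the replicator-dynamics machinery concrete, here is a sketch with a hypothetical two-strategy Hawk-Dove payoff matrix (not the server's calibrated five-strategy matrix): the strategy mix is iterated with the replicator update until it settles at the mixed evolutionarily stable strategy.

```python
# Hypothetical Hawk-Dove payoff matrix (V=2, C=4), for illustration only;
# the server calibrates its five-strategy matrix from sanctions and CVE/KEV data.
PAYOFF = [
    [-1.0, 2.0],   # Hawk's payoff vs (Hawk, Dove)
    [0.0, 1.0],    # Dove's payoff vs (Hawk, Dove)
]

def replicator_step(x, payoff, dt=0.1):
    """One Euler step of replicator dynamics: x_i += dt * x_i * (f_i - f_avg)."""
    n = len(x)
    f = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    f_avg = sum(x[i] * f[i] for i in range(n))
    return [x[i] + dt * x[i] * (f[i] - f_avg) for i in range(n)]

x = [0.9, 0.1]                   # start with a hawk-heavy population
for _ in range(2000):
    x = replicator_step(x, PAYOFF)
print([round(v, 3) for v in x])  # → [0.5, 0.5], the mixed ESS at V/C
```

With five strategies the update is identical; only the payoff matrix grows, which is why the same dynamics can be re-weighted per sector.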
**What does the Le Cam deficiency number mean for my monitoring?**
A deficiency close to 0 means your current monitoring is nearly as informative as the enhanced monitoring configuration (double the budget). A deficiency close to 1 means you are losing almost all detection information. The `sampleComplexity` field gives the concrete sample count needed for 80% statistical power — a number you can use directly in a budget justification.
**Can I use this with a local AI assistant that supports MCP?**
Yes. Any MCP-compatible client can connect. Add the server URL and your Bearer token to your client's MCP configuration. The endpoint uses the standard streamable HTTP MCP transport at `/mcp`.
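For clients that support header-based authentication, a configuration along these lines may work. The `headers` field is an assumption and varies by MCP client, so check your client's documentation:

```json
{
  "mcpServers": {
    "neuromorphic-threat-intelligence-mcp": {
      "url": "https://ryanclinton--neuromorphic-threat-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_APIFY_TOKEN>"
      }
    }
  }
}
```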
**How long does a typical tool call take?** Between 30 and 120 seconds, depending on how many upstream actors are needed and their current response times. Tools querying 6–7 sources take longer than those querying 4. Set your MCP client timeout to at least 180 seconds.
**Is it legal to analyze a domain I don't own using this server?** The server only uses publicly available data — the same data accessible to anyone via NVD, Censys, DNS lookup, or the CISA KEV website. There is no legal restriction on analyzing public vulnerability and certificate data about a domain. For specific legal advice about your jurisdiction and use case, consult a lawyer.
**What happens when an upstream actor returns no results?**
The client logs a warning and returns an empty array for that source. The algorithm runs on the remaining indicators. Results may be less precise, but the tool will not fail — you will see `dataSources.{source}: 0` in the output, which tells you where the gap is.
Help us improve
If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:
- Go to **Account Settings > Privacy**
- Enable **Share runs with public Actor creators**
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions or enterprise integrations, reach out through the Apify platform.
How it works
Configure
Set your parameters in the Apify Console or pass them via API.
Run
Click Start, trigger via API, webhook, or set up a schedule.
Get results
Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.
Use cases
Sales Teams
Build targeted lead lists with verified contact data.
Marketing
Research competitors and identify outreach opportunities.
Data Teams
Automate data collection pipelines with scheduled runs.
Developers
Integrate via REST API or use as an MCP tool in AI workflows.
Related actors
Bulk Email Verifier
Verify email deliverability at scale. MX record validation, SMTP mailbox checks, disposable and role-based detection, catch-all flagging, and confidence scoring. No external API costs.
GitHub Repository Search
Search GitHub repositories by keyword, language, topic, stars, forks. Sort by stars, forks, or recently updated. Returns metadata, topics, license, owner info, URLs. Free API, optional token for higher limits.
Website Content to Markdown
Convert any website to clean Markdown for RAG pipelines, LLM training, and AI apps. Crawls pages, strips boilerplate, preserves headings, tables, and code blocks. GFM support.
Website Tech Stack Detector
Detect 100+ web technologies on any website. Identifies CMS, frameworks, analytics, marketing tools, chat widgets, CDNs, payment systems, hosting, and more. Batch-analyze multiple sites with version detection and confidence scoring.
Ready to try Neuromorphic Threat Intelligence MCP Server?
Start for free on Apify. No credit card required.
Open on Apify Store