Critical Infrastructure Interdependency MCP Server
Critical infrastructure cascade analysis for AI agents — this MCP server maps cross-sector interdependencies across energy, telecom, water, cyber, transport, and financial networks and simulates how failures propagate through them. Connect any MCP-compatible AI agent to 8 specialized tools that run 6 mathematical algorithms on real data from 14 live sources.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| build-network | 6-layer multiplex hypergraph construction | $0.08 |
| cascade-failure | BTW sand-pile self-organized criticality | $0.10 |
| cyber-physical | Attack graph x dependency graph product | $0.10 |
| geographic-correlation | Voronoi tessellation with multi-hazard overlay | $0.06 |
| recovery-timeline | Critical path method on dependency DAG | $0.06 |
| critical-nodes | Supra-Laplacian algebraic connectivity | $0.08 |
| disaster-impact | Natural disaster infrastructure impact model | $0.08 |
| resilience-assessment | Full infrastructure resilience report | $0.15 |
Example: 100 events = $8.00 · 1,000 events = $80.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
https://ryanclinton--critical-infrastructure-interdependency-mcp.apify.actor/mcp

```json
{
  "mcpServers": {
    "critical-infrastructure-interdependency-mcp": {
      "url": "https://ryanclinton--critical-infrastructure-interdependency-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation
The server builds a 6-layer multiplex hypergraph from OpenStreetMap infrastructure data, DNS records, vulnerability databases, and seismic/weather/disaster feeds, then applies BTW sand-pile cascade simulation, Voronoi geographic co-location risk, Critical Path Method recovery estimation, and supra-Laplacian algebraic connectivity indexing to produce actionable resilience intelligence. No setup required beyond an API token — connect via the Model Context Protocol /mcp endpoint.
What data can you access?
| Data Point | Source | Coverage |
|---|---|---|
| 📍 Physical infrastructure nodes | OSM POI Search | Global OpenStreetMap (airports, substations, water plants, data centers) |
| 🌐 DNS records and IP topology | DNS Record Lookup | Any domain — A, MX, NS, CNAME records |
| 🛰️ Internet host exposure | Censys Search | Full IPv4 scan data including open ports and banners |
| 🔒 SSL certificate chains | crt.sh Search | Certificate Transparency logs for any domain |
| 💻 Website technology fingerprints | Website Tech Stack Detector | CMS, frameworks, CDN, hosting for any URL |
| 🌍 IP geolocation | IP Geolocation Lookup | Global IP-to-coordinate resolution for network topology |
| ⚠️ CVE vulnerabilities | NVD CVE Search | Full NVD database with CVSS scores |
| 🚨 Actively exploited vulnerabilities | CISA KEV Catalog | Known Exploited Vulnerabilities with remediation deadlines |
| 🌋 Seismic events | USGS Earthquake Search | Global earthquake catalog with magnitude and coordinates |
| 🌩️ Weather alerts | NOAA Weather Alerts | US National Weather Service active warnings |
| 🏚️ US disaster declarations | FEMA Disaster Search | Presidential disaster declarations by state |
| 🌊 Flood warnings | UK Flood Monitoring | England and Wales Environment Agency flood data |
| 🌏 Global disaster alerts | GDACS | UN-affiliated real-time global disaster notifications |
| 📌 Geocoding | Nominatim | OpenStreetMap geocoder — any city, state, or country |
Why use Critical Infrastructure Interdependency MCP Server?
Infrastructure resilience analysis has traditionally required specialized GIS software, proprietary vulnerability databases, and custom simulation code that takes weeks to build. Consulting firms charge $50,000–$200,000 for a single regional infrastructure assessment. Emergency management agencies run tabletop exercises that miss quantitative cascade dynamics entirely.
This MCP server automates the entire pipeline — from raw infrastructure discovery through cascade simulation to recovery timeline estimation — in a single AI agent conversation. An analyst can model the impact of a Category 4 hurricane on Texas's power grid in one tool call, identify the three substations whose failure would cascade to financial services, and estimate restoration time using the Critical Path Method, all without leaving their AI assistant.
- Scheduling — run weekly resilience snapshots to track how infrastructure changes affect algebraic connectivity over time
- API access — trigger assessments programmatically from Python, JavaScript, or any HTTP client via the Apify API
- Proxy rotation — underlying data collection actors use Apify's built-in proxy infrastructure for reliable large-scale discovery
- Monitoring — get Slack or email alerts when resilience index drops below threshold after infrastructure changes
- Integrations — connect results to Zapier, Make, Google Sheets, or push structured findings directly into GRC platforms via webhooks
Features
- 6-layer multiplex hypergraph construction — builds an InfraNode/InfraEdge network across energy, telecom, water, cyber, transport, and financial sectors from real OSM POI data, DNS records, and geocoded coordinates
- Hardcoded sector dependency weights — inter-layer edges encode empirical coupling strengths: telecom-to-energy at 0.9, financial-to-cyber at 0.9, cyber-to-telecom at 0.95, water-to-energy at 0.85
- BTW sand-pile cascade simulation — implements the Bak-Tang-Wiesenfeld self-organized criticality model where each node topples when load z_i exceeds threshold z_c = 1.0, redistributing load to neighbors as z_neighbor += z_i / degree(i), tracking avalanche size distributions and power-law exponent
- SOC detection — measures criticality exponent and flags systems exhibiting self-organized critical behavior where small perturbations can trigger large-scale collapse
- Graph product cyber-physical coupling — constructs the tensor product of the cyber attack graph (AG) and physical dependency graph (PG) to identify exploit chains that cascade from CVE vulnerabilities into physical infrastructure damage
- Voronoi geographic co-location risk — tessellates the region into Voronoi cells, overlays earthquake/flood/weather/disaster hazard data, and computes per-cell risk as: sectors_present × hazard_probability × (1 − Shannon_entropy_diversity)
- CPM recovery timeline estimation — runs forward/backward pass on a dependency DAG respecting sector restoration order (energy → water → telecom → transport → cyber → financial) with default recovery hours (energy: 72h, water: 48h, telecom: 24h), producing critical path and zero-slack bottleneck nodes
- Supra-Laplacian algebraic connectivity — computes lambda_2 (Fiedler value) of the supra-Laplacian matrix across all 6 layers; classifies resilience as CRITICAL / LOW / MODERATE / HIGH / VERY_HIGH based on the Fiedler vector
- Multi-metric critical node identification — scores nodes by composite: cascade impact 40% + algebraic connectivity drop on removal 30% + degree centrality 30%
- Haversine distance calculations — uses exact spherical geometry for geographic proximity edges and impact zone determination (radius in km from region center)
- Parallel actor orchestration — runs up to 14 underlying actors concurrently via `runActorsParallel`, reducing total wall time from 20+ minutes to 3–5 minutes per tool call
- Spending limit enforcement — every tool call checks the `Actor.charge()` event limit before running, halting cleanly if the per-run budget is reached
- Keyword-based sector classification — classifies OSM POI nodes into sectors using 60+ infrastructure keywords across 6 domains (e.g., "substation", "data center", "aqueduct", "clearing house")
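The haversine proximity calculation behind the geographic edges is standard spherical geometry. A minimal stdlib-only sketch (the Houston/Galveston coordinates are just sample inputs):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Sample: Houston city center to Galveston, roughly 80 km
d = haversine_km(29.7604, -95.3698, 29.3013, -94.7977)
```

Nodes within a sector are linked when this distance falls below the proximity threshold; the same function determines membership in a disaster impact zone of a given radius.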
Use cases for critical infrastructure interdependency analysis
Emergency management and disaster preparedness
State and federal emergency managers use model_natural_disaster_impact to simulate how an earthquake, flood, or hurricane would disable regional infrastructure before the event occurs. The tool combines real USGS seismic data and NOAA weather alerts with the OSM infrastructure network, runs BTW cascade simulation from directly impacted nodes, and estimates total restoration time via CPM — producing a structured briefing that quantifies percentage of network affected and identifies bottleneck restoration tasks.
Critical infrastructure protection and investment prioritization
Facility security officers and infrastructure investment analysts use identify_critical_nodes to rank every node in a region by its composite criticality score. Rather than protecting everything equally, the multi-metric scoring (cascade impact, algebraic connectivity drop, degree centrality) identifies the 5–10 nodes whose failure would most destabilize the entire cross-sector network — enabling targeted hardening investments.
Cyber-physical threat modeling for ICS/OT security
ICS security teams and APT red teams use assess_cyber_physical_attack to map how vulnerabilities in SCADA systems, ICS software, or internet-exposed control systems translate into physical infrastructure damage. The graph product attack surface model traces maximum-weight exploit chains from NVD CVEs and CISA KEV entries through Censys-discovered exposed hosts into physical dependency nodes, identifying the most exploitable cyber entry point and most vulnerable physical asset.
Infrastructure resilience auditing and regulatory compliance
Utilities, telecoms, and financial institutions subject to NERC CIP, TSA Pipeline Security directives, or DORA requirements use generate_resilience_assessment to produce a full quantitative resilience report. The assessment combines all 6 algorithms — hypergraph construction, cascade simulation, cyber-physical coupling, geographic co-location risk, CPM recovery estimation, and supra-Laplacian algebraic connectivity — into a single structured output with findings, recommendations, and a System Resilience Index.
Insurance underwriting and catastrophe risk modeling
Catastrophe risk modelers at insurance and reinsurance firms use compute_geographic_correlation to identify geographic areas where multiple infrastructure sectors cluster within the same hazard zone. The Voronoi tessellation with Shannon entropy diversity index surfaces co-location risks that traditional property cat models miss — a data center next to a substation in a FEMA flood zone has a fundamentally different risk profile than isolated assets.
Post-incident recovery planning
Incident commanders and infrastructure restoration teams use estimate_recovery_timeline after a failure event to determine the critical path back to full operation. The CPM forward/backward pass respects physical dependencies (you cannot restore financial systems before power and telecom are restored) and identifies zero-slack bottleneck nodes that will determine total restoration time if delayed.
How to use this MCP server for infrastructure analysis
1. Connect the server to your AI agent — add the MCP endpoint URL to your client configuration. For Claude Desktop, paste the JSON block from the "How to connect" section below into your `claude_desktop_config.json`. No API key required in the config — the server handles authentication.
2. Start with a region — in your agent conversation, ask it to call `build_infrastructure_network` with a city, state, or country name such as "Houston, Texas" or "Netherlands". Optionally provide known infrastructure domain names to enrich the cyber layer.
3. Run the analysis tools — ask your agent to call `simulate_cascade_failure`, `compute_geographic_correlation`, `identify_critical_nodes`, or `model_natural_disaster_impact` for the same region. Each tool builds the network fresh or accepts output from a prior `build_infrastructure_network` call.
4. Review structured results — each tool returns a JSON report with network statistics, ranked findings, sector breakdowns, and geographic coordinates. Ask your agent to summarize findings, generate a risk briefing, or export the structured data.
How to connect this MCP server
Claude Desktop
```json
{
  "mcpServers": {
    "critical-infrastructure-interdependency": {
      "url": "https://critical-infrastructure-interdependency-mcp.apify.actor/mcp"
    }
  }
}
```
Cursor
```json
{
  "mcpServers": {
    "critical-infrastructure-interdependency": {
      "url": "https://critical-infrastructure-interdependency-mcp.apify.actor/mcp"
    }
  }
}
```
Windsurf / Codeium
```json
{
  "mcpServers": {
    "critical-infrastructure-interdependency": {
      "url": "https://critical-infrastructure-interdependency-mcp.apify.actor/mcp"
    }
  }
}
```
MCP tools reference
| Tool | Description | Typical cost |
|---|---|---|
| build_infrastructure_network | Build 6-layer multiplex hypergraph from OSM POI, DNS, tech stack, geocoding, and IP geolocation data. Returns supra-adjacency matrix and per-layer statistics. | $150–200 |
| simulate_cascade_failure | Run BTW sand-pile cascade simulation with configurable trigger nodes or sectors. Returns avalanche size distribution, criticality exponent, and SOC flag. | $150–250 |
| assess_cyber_physical_attack | Model cyber-to-physical attack surface via graph product of CVE/KEV/Censys attack graph and physical dependency graph. Returns exploit chains and most vulnerable assets. | $200–350 |
| compute_geographic_correlation | Voronoi tessellation with multi-hazard overlay (earthquake, flood, weather, disaster). Returns high-risk cells ranked by correlation risk score. | $150–300 |
| estimate_recovery_timeline | CPM forward/backward pass on dependency DAG. Returns total recovery hours, critical path nodes, sector restoration order, and zero-slack bottlenecks. | $150–200 |
| identify_critical_nodes | Rank nodes by composite score: cascade impact (40%) + algebraic connectivity drop (30%) + degree centrality (30%). Returns top-N with explanatory reasons. | $150–250 |
| model_natural_disaster_impact | Full pipeline: hazard data + infrastructure network + cascade simulation + CPM recovery for a specified disaster type and impact radius. | $200–400 |
| generate_resilience_assessment | Comprehensive audit running all 6 algorithms across all 14 data sources. Returns System Resilience Index, findings, and prioritized recommendations. | $300–400 |
Tool input parameters
| Tool | Parameter | Type | Required | Default | Description |
|---|---|---|---|---|---|
| All tools | region | string | Yes | — | City, state, or country name (e.g., "Houston, Texas") |
| build_infrastructure_network | domains | string[] | No | — | Infrastructure domain names for cyber layer enrichment |
| build_infrastructure_network | sectors | string[] | No | all 6 | Focus sectors: energy, telecom, water, cyber, transport, financial |
| build_infrastructure_network | radius_km | number | No | 100 | Search radius from region center in kilometers |
| simulate_cascade_failure | trigger_nodes | string[] | No | — | Specific node IDs to trigger (from build_infrastructure_network) |
| simulate_cascade_failure | trigger_sector | string | No | — | Trigger the 3 highest-criticality nodes in this sector |
| simulate_cascade_failure | max_cascade_steps | number | No | 50 | Maximum cascade propagation iterations |
| assess_cyber_physical_attack | domains | string[] | Yes | — | Target domains or IPs to scan for cyber attack surface |
| assess_cyber_physical_attack | cve_keywords | string | No | "SCADA ICS critical infrastructure" | CVE search keywords for vulnerability query |
| compute_geographic_correlation | include_earthquakes | boolean | No | true | Include USGS earthquake overlay |
| compute_geographic_correlation | include_weather | boolean | No | true | Include NOAA weather alert overlay |
| compute_geographic_correlation | include_floods | boolean | No | true | Include FEMA/UK Flood/GDACS overlay |
| compute_geographic_correlation | days_lookback | number | No | 30 | Days of historical hazard data to include |
| estimate_recovery_timeline | failed_node_ids | string[] | No | all nodes | Specific failed node IDs from cascade output |
| estimate_recovery_timeline | failed_sectors | string[] | No | — | Assume all nodes in these sectors have failed |
| estimate_recovery_timeline | scenario | string | No | — | Scenario description for context in output |
| identify_critical_nodes | top_n | number | No | 10 | Number of top critical nodes to return |
| model_natural_disaster_impact | disaster_type | string | No | "all" | earthquake, flood, hurricane, wildfire, or "all" |
| model_natural_disaster_impact | impact_radius_km | number | No | 100 | Radius of disaster impact zone in km |
| generate_resilience_assessment | domains | string[] | No | — | Infrastructure domains for cyber layer |
| generate_resilience_assessment | cve_keywords | string | No | "SCADA ICS critical infrastructure" | CVE search keywords |
Output example
The following is representative output from model_natural_disaster_impact for a major urban region:
```json
{
  "region": "Houston, Texas",
  "disasterType": "earthquake",
  "impactZone": {
    "center": { "lat": 29.7604, "lon": -95.3698 },
    "radiusKm": 100,
    "directlyImpactedNodes": 34,
    "cascadeAdditionalNodes": 18,
    "totalAffectedNodes": 52,
    "totalNetworkNodes": 67,
    "percentageAffected": 78
  },
  "cascadeAnalysis": {
    "totalAvalanches": 12,
    "maxAvalancheSize": 19,
    "isSelfOrganizedCritical": true,
    "affectedSectors": ["energy", "water", "telecom", "transport", "financial"]
  },
  "recoveryEstimate": {
    "totalRecoveryHours": 218.4,
    "totalRecoveryDays": 9.1,
    "criticalPathLength": 8,
    "sectorRecoveryOrder": [
      { "sector": "energy", "startHour": 0, "endHour": 68.2 },
      { "sector": "water", "startHour": 0, "endHour": 51.7 },
      { "sector": "telecom", "startHour": 68.2, "endHour": 89.5 },
      { "sector": "transport", "startHour": 68.2, "endHour": 97.1 },
      { "sector": "cyber", "startHour": 89.5, "endHour": 101.8 },
      { "sector": "financial", "startHour": 101.8, "endHour": 110.3 }
    ],
    "bottleneckNodes": ["node-4", "node-11", "node-23"]
  },
  "geographicRisk": {
    "highRiskCells": 7,
    "maxCorrelationRisk": 3.84
  },
  "hazardData": {
    "earthquakes": 14,
    "weatherAlerts": 6,
    "femaDisasters": 9,
    "gdacsAlerts": 3,
    "floodWarnings": 5
  }
}
```
Representative output from identify_critical_nodes:
```json
{
  "region": "Rotterdam, Netherlands",
  "networkSize": { "nodes": 71, "edges": 184 },
  "resilienceIndex": {
    "algebraicConnectivity": 0.12,
    "resilience": "LOW",
    "networkPartitionRisk": 0.73
  },
  "criticalNodes": [
    {
      "nodeId": "node-3",
      "nodeName": "Pernis Oil Refinery Power Substation",
      "sector": "energy",
      "criticalityScore": 0.891,
      "reasons": ["Triggers avalanche size 23 in BTW cascade", "Removal reduces lambda_2 by 0.047", "Degree centrality 0.31 across all layers"]
    },
    {
      "nodeId": "node-17",
      "nodeName": "Rotterdam Port Fiber Exchange",
      "sector": "telecom",
      "criticalityScore": 0.764,
      "reasons": ["Cascade impact to 4 sectors", "Inter-layer hub: telecom/cyber/financial", "Zero slack in CPM recovery path"]
    }
  ],
  "sectorBreakdown": [
    { "sector": "energy", "criticalCount": 4, "avgCriticality": 0.781 },
    { "sector": "telecom", "criticalCount": 3, "avgCriticality": 0.694 },
    { "sector": "water", "criticalCount": 2, "avgCriticality": 0.612 },
    { "sector": "cyber", "criticalCount": 1, "avgCriticality": 0.587 },
    { "sector": "transport", "criticalCount": 2, "avgCriticality": 0.541 },
    { "sector": "financial", "criticalCount": 1, "avgCriticality": 0.498 }
  ]
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| region | string | Input region name |
| network.totalNodes | number | Total infrastructure nodes across all 6 layers |
| network.totalEdges | number | Total intra-layer + inter-layer edges |
| network.interLayerEdges | number | Cross-sector dependency edges in supra-adjacency matrix |
| network.layers | object | Per-sector node count, edge count, and density |
| nodes[].id | string | Unique node identifier (e.g., "node-14") |
| nodes[].name | string | Infrastructure name from OSM or geocoder |
| nodes[].sector | string | Sector: energy / telecom / water / cyber / transport / financial |
| nodes[].criticality | number | Node criticality 0–1 from composite scoring |
| nodes[].lat | number | Latitude coordinate |
| nodes[].lon | number | Longitude coordinate |
| simulation.isSelfOrganizedCritical | boolean | True if avalanche sizes follow power law |
| simulation.criticalityExponent | number | Power-law exponent of avalanche size distribution |
| simulation.maxAvalancheSize | number | Largest cascade observed across all triggers |
| avalanches[].affectedSectors | string[] | Sectors reached by this cascade event |
| attackSurface.productGraphNodes | number | Nodes in cyber × physical product graph |
| attackSurface.attackSurfaceArea | number | Aggregate attack surface metric |
| criticalPaths[].exploitChain | string[] | Ordered CVE/host IDs forming the attack path |
| criticalPaths[].physicalCascade | string[] | Physical node IDs affected downstream |
| criticalPaths[].combinedRisk | number | Composite cyber + physical risk score |
| geoCorrelation.highRiskCells | number | Voronoi cells with correlation risk above threshold 1.0 |
| geoCorrelation.maxCorrelationRisk | number | Peak risk value across all cells |
| recovery.totalRecoveryHours | number | CPM-estimated total restoration time |
| recovery.criticalPath | string[] | Zero-slack node IDs determining total recovery time |
| recovery.bottleneckNodes | string[] | Nodes that extend total recovery if delayed |
| recovery.sectorRecoveryOrder | object[] | Start/end hours per sector in restoration sequence |
| resilienceIndex.algebraicConnectivity | number | Lambda_2 Fiedler value of supra-Laplacian |
| resilienceIndex.resilience | string | CRITICAL / LOW / MODERATE / HIGH / VERY_HIGH |
| resilienceIndex.networkPartitionRisk | number | Probability-like score of network splitting under stress |
| findings | string[] | Auto-generated warnings and recommendations from combined analysis |
How much does critical infrastructure analysis cost?
This MCP server uses pay-per-event pricing — you pay per tool call. Platform compute costs are included.
| Scenario | Tool | Cost per call | Typical monthly |
|---|---|---|---|
| Quick regional snapshot | identify_critical_nodes | ~$0.04 | $0.04 |
| Disaster impact simulation | model_natural_disaster_impact | ~$0.04 | $0.04 |
| Full resilience audit | generate_resilience_assessment | ~$0.04 | $0.04 |
| Weekly monitoring (1 region) | Any single tool × 4 | ~$0.04 | $0.16 |
| Multi-region program (10 regions) | generate_resilience_assessment × 10 | ~$0.04 | $0.40 |
Each tool call charges a single platform event. The underlying actors (OSM, USGS, NVD, etc.) consume Apify compute credits during data collection — the $0.04 event fee covers the MCP server's orchestration layer, while underlying actor runs draw from your platform credit balance.
You can set a maximum spending limit per run to control costs. The server stops cleanly when your budget is reached. The Apify Free plan includes $5 of monthly platform credits — enough for dozens of analysis tool calls at no cost.
Critical infrastructure analysis using the API
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("ryanclinton/critical-infrastructure-interdependency-mcp").call(run_input={})

print("MCP server running at: https://critical-infrastructure-interdependency-mcp.apify.actor/mcp")
print(f"Actor run ID: {run['id']}")
print(f"Status: {run['status']}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });
const run = await client.actor("ryanclinton/critical-infrastructure-interdependency-mcp").call({});

console.log("MCP server running at: https://critical-infrastructure-interdependency-mcp.apify.actor/mcp");
console.log(`Actor run ID: ${run.id}`);
console.log(`Status: ${run.status}`);
```
cURL
```shell
# Start the MCP server actor run
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~critical-infrastructure-interdependency-mcp/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'

# The MCP endpoint becomes available at:
# https://critical-infrastructure-interdependency-mcp.apify.actor/mcp
# Use this URL in your MCP client configuration
```
Direct MCP tool call via HTTP
```shell
curl -X POST "https://critical-infrastructure-interdependency-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "identify_critical_nodes",
      "arguments": {
        "region": "Houston, Texas",
        "top_n": 10
      }
    }
  }'
```
How this MCP server works
Phase 1: Infrastructure discovery and multiplex hypergraph construction
The buildMultiplexHypergraph function ingests data from up to 5 sources per tool call — OSM POI, Nominatim geocoder, DNS lookups, tech stack fingerprints, and IP geolocation — and classifies each node into one of 6 sectors using a keyword dictionary of 60+ terms (e.g., "substation" → energy, "data center" → cyber, "aqueduct" → water). Intra-layer edges connect nodes within the same sector when haversine distance is below a proximity threshold. Inter-layer edges encode 12 hardcoded sector dependency relationships with empirical coupling weights derived from infrastructure systems literature (e.g., telecom-to-energy at 0.9, financial-to-cyber at 0.9). The result is a MultiplexHypergraph with a full supra-adjacency matrix for downstream algorithm inputs.
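A minimal sketch of that keyword-based sector classification step. The keyword sets below are illustrative samples, not the server's full 60+ term dictionary; the fallback to `cyber` mirrors the default noted in the Limitations section:

```python
# Illustrative keyword dictionary (sample terms only, not the server's full list)
SECTOR_KEYWORDS = {
    "energy": ["substation", "power plant", "transformer", "refinery"],
    "telecom": ["cell tower", "fiber", "exchange", "antenna"],
    "water": ["aqueduct", "water treatment", "reservoir", "pumping station"],
    "cyber": ["data center", "server", "colocation"],
    "transport": ["airport", "rail yard", "port", "bridge"],
    "financial": ["clearing house", "stock exchange", "bank"],
}

def classify_sector(poi_name: str, default: str = "cyber") -> str:
    """Assign an OSM POI to a sector by first keyword match on its name."""
    name = poi_name.lower()
    for sector, keywords in SECTOR_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return sector
    return default  # unnamed or unmatched nodes fall back to the default sector

classify_sector("Pernis Oil Refinery Power Substation")  # matches "substation": "energy"
```

In practice the classified nodes then get intra-layer edges by haversine proximity and inter-layer edges from the hardcoded dependency weights.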
Phase 2: Cascade simulation via BTW sand-pile model
The simulateCascadeFailure function implements the Bak-Tang-Wiesenfeld self-organized criticality model. Each InfraNode carries a load value (initialized 0.3–0.8) and toppling threshold z_c = 1.0. When a trigger node or sector fails, load is redistributed to neighbors as z_neighbor += z_i / degree(i). This continues iteratively for up to max_cascade_steps steps. The function tracks avalanche events, fits a power-law distribution to the avalanche size sequence, and flags systems where the criticality exponent indicates genuine self-organized critical behavior — meaning the infrastructure has naturally evolved to a state where large-scale collapse is always possible.
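The toppling rule can be sketched in a few lines of Python. This is an illustrative model with a hand-picked three-node graph and fixed initial loads (the server initializes loads randomly in 0.3–0.8), not the server's implementation:

```python
def btw_cascade(adj, load, trigger, z_c=1.0, max_steps=50):
    """Topple nodes whose load reaches z_c, redistributing load to neighbors.
    Returns the set of failed nodes (one avalanche)."""
    load = dict(load)
    load[trigger] = z_c  # the trigger node is pushed to its threshold
    failed = set()
    unstable = [trigger]
    for _ in range(max_steps):
        if not unstable:
            break
        nxt = []
        for node in unstable:
            if load[node] < z_c or node in failed:
                continue
            failed.add(node)
            share = load[node] / max(len(adj[node]), 1)
            for nb in adj[node]:
                load[nb] += share  # z_neighbor += z_i / degree(i)
                if load[nb] >= z_c and nb not in failed:
                    nxt.append(nb)
            load[node] = 0.0
        unstable = nxt
    return failed

# Tiny triangle graph with fixed loads: the trigger cascades to all three nodes
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
load = {"a": 0.5, "b": 0.8, "c": 0.3}
avalanche = btw_cascade(adj, load, trigger="a")  # {"a", "b", "c"}
```

Repeating this over many triggers yields the avalanche size sequence to which the power-law exponent is fitted.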
Phase 3: Cyber-physical coupling via graph product
The computeCyberPhysicalCoupling function builds two graphs — the cyber attack graph (AG) from NVD CVEs, CISA KEV entries, Censys-discovered hosts, and SSL certificate data, and the physical dependency graph (PG) from OSM infrastructure nodes. It constructs their tensor product AG × PG, where each product node (a, p) represents exploiting vulnerability a to compromise physical asset p. Maximum-weight paths through this product graph represent exploit chains with the highest probability of cascading from a cyber intrusion into physical infrastructure damage. The output identifies mostExploitableCyber (CVE entry point) and mostVulnerablePhysical (physical asset most reachable from cyber attack surface).
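The tensor-product construction itself is simple to illustrate. The adjacency-dict representation and the tiny example graphs below are hypothetical:

```python
from itertools import product

def tensor_product(ag, pg):
    """Tensor product of weighted digraphs given as {u: {v: weight}} dicts.
    Product node (a, p) has an edge to (a2, p2) iff a->a2 in AG and p->p2 in PG;
    the product edge weight is the product of the two edge weights."""
    return {
        (a, p): {
            (a2, p2): wa * wp
            for a2, wa in ag[a].items()
            for p2, wp in pg[p].items()
        }
        for a, p in product(ag, pg)
    }

# Hypothetical one-step exploit (CVE -> host) and physical dependency (substation -> pump)
ag = {"CVE-X": {"host-1": 0.8}, "host-1": {}}
pg = {"substation": {"pump": 0.85}, "pump": {}}
prod = tensor_product(ag, pg)
# Edge ("CVE-X", "substation") -> ("host-1", "pump") carries weight 0.8 * 0.85
```

Maximum-weight path search over this product graph then recovers the exploit chains described above.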
Phase 4: Geographic co-location risk via Voronoi tessellation
The computeGeographicCorrelation function partitions the region into Voronoi cells, assigning each geographic area to its nearest infrastructure node as a cell center. For each cell, it overlays hazard data from USGS (seismic), NOAA (weather), FEMA/GDACS (disasters), and UK Flood Monitoring. Cell risk is computed as: sectors_present × hazard_probability × (1 − diversity_index), where diversity_index = Shannon_entropy(sector_counts) / log(n_sectors). A cell with 4 co-located sectors in a high-hazard zone but low sector diversity scores higher than a well-diversified cluster in the same hazard zone — capturing both geographic concentration risk and the absence of sector redundancy.
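The per-cell risk formula can be sketched directly (the Voronoi cell assignment itself is omitted here). The sector counts and hazard probability below are made-up inputs:

```python
import math

def cell_risk(sector_counts, hazard_probability):
    """Risk = sectors_present * hazard_probability * (1 - diversity_index),
    with diversity_index = Shannon_entropy(sector_counts) / log(n_sectors)."""
    n_sectors = 6  # energy, telecom, water, cyber, transport, financial
    total = sum(sector_counts.values())
    entropy = -sum(
        (c / total) * math.log(c / total) for c in sector_counts.values() if c > 0
    )
    diversity = entropy / math.log(n_sectors)
    return len(sector_counts) * hazard_probability * (1 - diversity)

# Four co-located sectors dominated by one sector: low diversity, higher risk
concentrated = cell_risk({"energy": 9, "water": 1, "telecom": 1, "cyber": 1}, 0.6)
# Same four sectors evenly spread: higher diversity, lower risk
diversified = cell_risk({"energy": 3, "water": 3, "telecom": 3, "cyber": 3}, 0.6)
```

This captures the behavior described above: for the same hazard probability, the concentrated cell scores higher than the diversified one.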
Phase 5: CPM recovery timeline and supra-Laplacian resilience indexing
estimateRecoveryTimeline builds a dependency-ordered DAG where sector restoration must follow energy → water → telecom → transport → cyber → financial. It runs standard CPM forward pass (ES(j) = max(ES(i) + duration(i))) and backward pass (LF(i) = min(LF(j) − duration(j))), identifying zero-slack nodes that form the critical path. Separately, computeResilienceIndex computes the supra-Laplacian matrix of the full multiplex network and approximates lambda_2 (algebraic connectivity / Fiedler value). A lambda_2 near zero indicates the network is close to disconnection; the function also performs single-node removal sensitivity analysis to identify which node's removal causes the steepest drop in connectivity.
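The CPM forward/backward pass can be sketched on a toy three-sector DAG using the default recovery hours quoted earlier (energy: 72h, water: 48h, telecom: 24h). This assumes tasks are supplied in topological order and is not the server's code:

```python
def cpm(tasks, deps):
    """Critical Path Method on a dependency DAG.
    tasks: {name: duration_hours}; deps: {name: [prerequisite names]}.
    Returns (total_hours, zero-slack critical-path nodes)."""
    order = list(tasks)  # assumed to be a valid topological order
    # Forward pass: ES(j) = max over prerequisites i of ES(i) + duration(i)
    es = {}
    for t in order:
        es[t] = max((es[p] + tasks[p] for p in deps.get(t, [])), default=0.0)
    total = max(es[t] + tasks[t] for t in tasks)
    # Backward pass: LF(i) = min over successors j of LF(j) - duration(j)
    succ = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succ[p].append(t)
    lf = {}
    for t in reversed(order):
        lf[t] = min((lf[j] - tasks[j] for j in succ[t]), default=total)
    critical = [t for t in order if abs(lf[t] - tasks[t] - es[t]) < 1e-9]
    return total, critical

# Water and telecom both wait on energy; water dominates the critical path
tasks = {"energy": 72.0, "water": 48.0, "telecom": 24.0}
deps = {"water": ["energy"], "telecom": ["energy"]}
total, critical = cpm(tasks, deps)  # total 120.0 h, critical path energy -> water
```

Nodes on the critical path have zero slack: delaying any of them extends the total restoration time one-for-one, which is exactly the bottleneck property reported by the tool.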
Tips for best results
- Start with `build_infrastructure_network` before other tools. The network construction step costs the same as running individual tools but caches geography and sector classification, making subsequent calls faster and more consistent.
- Use `generate_resilience_assessment` for compliance reports. This single tool call runs all 6 algorithms and all 14 data sources, producing findings and recommendations suitable for regulatory submission. It costs roughly the same as 2–3 individual tool calls.
- Provide infrastructure domains when assessing cyber-physical risk. Without domains, `assess_cyber_physical_attack` and `build_infrastructure_network` skip the DNS, tech stack, Censys, and SSL certificate layers. Even 2–3 known domain names (utility SCADA portals, ISP NOC sites, financial exchange domains) dramatically enrich the cyber layer.
- Use `trigger_sector` rather than `trigger_nodes` for scenario planning. Specifying `trigger_sector: "energy"` automatically selects the 3 highest-criticality energy nodes as failure triggers — more realistic than arbitrary node IDs, and it requires no prior `build_infrastructure_network` call.
- Combine `compute_geographic_correlation` with `estimate_recovery_timeline` for insurance applications. Geographic co-location surfaces correlated loss scenarios; CPM recovery estimation converts them into business interruption durations. Together they provide the two inputs insurers need for parametric coverage design.
- For large regions, reduce `radius_km` to 50. The default 100 km radius can return 50+ infrastructure nodes, which increases cascade simulation and product graph computation time. A tighter radius produces a denser, more meaningful network for urban analysis.
- Monitor `isSelfOrganizedCritical: true` as an early warning signal. Any network exhibiting SOC behavior in the BTW simulation is operating in a fragile regime. Flag this finding immediately and use `identify_critical_nodes` to determine which nodes to harden first.
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| Website Tech Stack Detector | Enrich the cyber layer of build_infrastructure_network with technology fingerprints from known infrastructure domains before running the cyber-physical attack assessment |
| WHOIS Domain Lookup | Identify registrant organizations behind infrastructure domains to map corporate ownership structure into the financial sector layer |
| Company Deep Research | After identify_critical_nodes, run deep research on the owning organizations of top-ranked critical nodes to assess corporate resilience and ownership concentration |
| Cyber Attack Surface Report | Run a dedicated attack surface assessment on the cyber entry points identified by assess_cyber_physical_attack for detailed remediation guidance |
| B2B Lead Qualifier | Identify infrastructure consulting and security vendors in regions flagged as CRITICAL or LOW resilience for targeted outreach |
| Website Change Monitor | Track changes to infrastructure operator websites and SCADA vendor pages that may indicate new CVEs or system updates affecting the cyber layer |
| Multi-Review Analyzer | Analyze customer reviews of utility providers in high-risk regions to surface service quality signals that correlate with infrastructure fragility |
Limitations
- OSM infrastructure coverage varies significantly by region. Developed regions (Western Europe, North America) have dense OSM infrastructure tagging; developing regions may return sparse networks that underrepresent actual physical complexity.
- BTW sand-pile loads are initialized randomly (0.3–0.8). Real infrastructure load distributions require SCADA telemetry. The model captures topological cascade dynamics accurately but not exact load-flow physics.
- Sector classification relies on name-based keyword matching. A "Crown Castle Tower" will correctly be classified as telecom; an unnamed OSM node ("node-47") defaults to cyber. Misclassification of poorly named nodes is possible.
- Cyber layer requires explicit domain inputs. Without provided domains, the cyber layer is populated only from DNS-observable infrastructure. Air-gapped or unlisted control systems will not appear.
- Geographic hazard overlay uses current and recent historical data. USGS, NOAA, and FEMA APIs return current active events and recent declarations — not engineering-grade probabilistic hazard maps (PSHA, FEMA P-58).
- Recovery hours are statistical defaults per sector (energy: 72h, water: 48h, telecom: 24h). Actual restoration times depend on damage severity, crew availability, and supply chain factors not captured in this model.
- Supra-Laplacian eigenvalue computation is approximate for large networks. The implementation uses power iteration rather than full eigendecomposition, which converges accurately for lambda_2 but may differ from exact solvers for highly symmetric graphs.
- The server requires standby mode to accept MCP connections. Cold-start latency is approximately 30–60 seconds when the server has been idle. Subsequent tool calls within the same session are immediate.
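To make the power-iteration approximation concrete, here is a minimal sketch of estimating lambda_2 (the algebraic connectivity) of a Laplacian without a full eigendecomposition, in the same spirit the limitation above describes. This is an illustrative reimplementation, not the server's actual code; the node-count cutoff mirrors the degenerate-Laplacian behavior noted in the troubleshooting section.

```python
import numpy as np

def algebraic_connectivity(L, iters=500, tol=1e-10):
    """Approximate lambda_2 (Fiedler value) of a graph Laplacian L by
    power iteration on B = lmax*I - L, projecting out the constant
    eigenvector so the dominant remaining mode corresponds to lambda_2."""
    n = L.shape[0]
    if n < 3:
        return 0.0  # degenerate Laplacian, as noted for tiny networks
    lmax = np.max(np.sum(np.abs(L), axis=1))  # Gershgorin bound on lambda_max
    B = lmax * np.eye(n) - L
    ones = np.ones(n) / np.sqrt(n)            # L's null-space direction
    v = np.random.default_rng(0).standard_normal(n)
    v -= (v @ ones) * ones
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = B @ v
        w -= (w @ ones) * ones                # stay orthogonal to the constant mode
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    mu = v @ (B @ v)                          # Rayleigh quotient for B
    return lmax - mu                          # lambda_2 of L

# Path graph on 4 nodes: exact lambda_2 = 2 - sqrt(2) ≈ 0.586
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
L = np.diag(A.sum(1)) - A
print(round(algebraic_connectivity(L), 3))  # 0.586
```

As the limitation notes, this style of iteration converges well for lambda_2 on typical networks but can disagree with exact solvers when eigenvalues are (near-)degenerate, as in highly symmetric graphs.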
Integrations
- Zapier — trigger a `model_natural_disaster_impact` run automatically when a new USGS earthquake alert fires, and push the structured recovery estimate to your emergency operations Slack channel
- Make — schedule weekly `generate_resilience_assessment` runs for monitored regions and write results to Airtable for trend tracking
- Google Sheets — export `identify_critical_nodes` output to a shared spreadsheet for infrastructure protection investment committee review
- Apify API — integrate the MCP server URL into any AI agent framework (LangChain, LlamaIndex, AutoGen) that supports the Model Context Protocol
- Webhooks — fire a webhook to your GRC platform when `resilienceIndex.resilience` drops to CRITICAL or LOW in a monitored region
- LangChain / LlamaIndex — use this server as a tool within multi-agent pipelines where one agent handles infrastructure analysis and another drafts regulatory compliance reports or executive briefings
Troubleshooting
Empty or minimal network returned for a region. This typically means OpenStreetMap coverage is sparse for that area. Try using a larger, more prominent city or state name rather than a small town or rural region. Adding explicit domain names in the domains parameter will enrich the network even when OSM data is thin.
assess_cyber_physical_attack returns zero product graph nodes. This tool requires at least one domain in the domains parameter. Without domains, the cyber attack graph has no nodes, making the graph product empty. Provide at least 2–3 relevant infrastructure domains (utility operator sites, exchange infrastructure, ISP NOC portals) to get meaningful results.
Tool call times out after 120 seconds. Individual underlying actors have a 120-second timeout and will return empty arrays if they exceed it. The most common cause is osmPoi timing out for very large regions. Reduce radius_km to 50 or specify a more focused city-level region to reduce OSM query scope.
generate_resilience_assessment findings list is empty. If all 6 algorithms complete without triggering warning thresholds (resilience is MODERATE or better, no SOC behavior, no critical path bottlenecks), the findings array will be empty — which is itself a meaningful result indicating the network is relatively resilient.
Algebraic connectivity reported as 0.0 for small networks. Networks with fewer than 3 nodes produce a degenerate Laplacian. This occurs when OSM returns no POI data for the region. Confirm OSM coverage for the region or provide a larger geographic scope.
Responsible use
- This server only accesses publicly available infrastructure data from OpenStreetMap, USGS, NOAA, FEMA, GDACS, NVD, CISA, and other open government sources.
- DNS lookups, Censys queries, and tech stack detection are passive and non-intrusive — no packets are sent to target infrastructure beyond standard DNS resolution.
- Do not use infrastructure mapping results to plan attacks on critical systems. This tool is designed for defensive resilience analysis, disaster preparedness, and regulatory compliance.
- Be aware that detailed infrastructure maps and vulnerability assessments may be subject to information sharing restrictions under CISA or sector-specific regulatory frameworks in your jurisdiction.
- For guidance on responsible use of infrastructure data, see Apify's guide.
FAQ
How does critical infrastructure interdependency analysis work in this MCP server? The server builds a 6-layer multiplex hypergraph from real infrastructure data (OpenStreetMap POIs, DNS records, IP geolocation) and applies 6 mathematical algorithms: BTW sand-pile cascade simulation, graph product cyber-physical coupling, Voronoi geographic co-location risk, CPM recovery estimation, supra-Laplacian algebraic connectivity, and multi-metric critical node identification. Each tool call orchestrates 5–14 underlying Apify actors in parallel to collect fresh live data before running the algorithms.
What infrastructure sectors are modeled? Six sectors: energy, telecom, water, cyber, transport, and financial. Inter-layer dependency edges encode empirical coupling weights — for example, telecom depends on energy with a coupling weight of 0.9, and financial depends on cyber with a weight of 0.9. These weights are used in both the supra-adjacency matrix and cascade propagation calculations.
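The coupling weights above slot into the supra-adjacency matrix as inter-layer blocks. The following sketch shows the construction for a toy two-layer (energy/telecom) network; the node counts and intra-layer edges are invented for illustration, while the 0.9 coupling weight is the telecom-on-energy value quoted above.

```python
import numpy as np

# Toy 2-layer supra-adjacency matrix. Layer 0 = energy (2 nodes),
# layer 1 = telecom (2 nodes); intra-layer edges are hypothetical.
A_energy  = np.array([[0, 1], [1, 0]], float)
A_telecom = np.array([[0, 1], [1, 0]], float)
w = 0.9                               # telecom depends on energy, weight 0.9

n = 2
S = np.zeros((2 * n, 2 * n))
S[:n, :n] = A_energy                  # energy block on the diagonal
S[n:, n:] = A_telecom                 # telecom block on the diagonal
S[:n, n:] = w * np.eye(n)             # inter-layer dependency edges
S[n:, :n] = w * np.eye(n)             # kept symmetric for the Laplacian

L = np.diag(S.sum(1)) - S             # supra-Laplacian for connectivity analysis
print(S.shape)
```

The full 6-layer version follows the same pattern: six diagonal blocks for intra-sector edges and weighted identity-style off-diagonal blocks for each dependency pair.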
How many data sources does the server use? 14 live data sources: OSM POI Search, Nominatim geocoder, DNS Record Lookup, Censys Search, crt.sh SSL certificates, Website Tech Stack Detector, IP Geolocation, NVD CVE Search, CISA KEV Catalog, USGS Earthquake Search, NOAA Weather Alerts, FEMA Disaster Search, UK Flood Monitoring, and GDACS global disaster alerts. Not all sources are queried for every tool call — lighter tools use a subset to reduce cost.
What is self-organized criticality and why does it matter for infrastructure?
The BTW (Bak-Tang-Wiesenfeld) sand-pile model demonstrates that complex systems naturally evolve toward a critical state where avalanches of all sizes occur. When isSelfOrganizedCritical is true, the infrastructure network's cascade size distribution follows a power law — meaning large-scale blackouts are not rare outliers but statistically inevitable. This is a structural warning, not just a scenario result.
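A minimal BTW sand-pile illustrates the behavior described above. This toy version runs on a square grid rather than the server's infrastructure topology, and the grid size and grain count are arbitrary choices for demonstration; what it shows is the heavy-tailed avalanche-size distribution that the SOC check looks for.

```python
import random
from collections import Counter

def btw_avalanche_sizes(n=20, grains=20000, seed=0):
    """Minimal BTW sand-pile on an n x n grid: drop grains at random
    sites; any site holding 4+ grains topples, sending one grain to
    each neighbor (grains falling off the edge are lost). Returns the
    size (number of topplings) of each avalanche."""
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        z[i][j] += 1
        size = 0
        unstable = [(i, j)] if z[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < 4:
                continue
            z[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    z[nx][ny] += 1
                    if z[nx][ny] >= 4:
                        unstable.append((nx, ny))
        if size:
            sizes.append(size)
    return sizes

sizes = btw_avalanche_sizes()
hist = Counter(sizes)
# Once the pile self-organizes, size-1 avalanches vastly outnumber large
# ones, yet large cascades keep occurring — the heavy-tailed SOC signature.
print(max(sizes), hist[1])
```

On an infrastructure graph the same mechanics apply, with node load thresholds and weighted neighbor transfers in place of the uniform grid rules.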
How accurate is the recovery timeline estimation? The CPM recovery estimate uses default sector recovery durations (energy: 72h, water: 48h, telecom: 24h, transport: 36h, cyber: 12h, financial: 8h) as baselines, scaled by random node-level variation. These are calibrated to published FEMA and NERC restoration statistics, not engineering-specific damage models. Treat the output as a planning order-of-magnitude estimate rather than a precise engineering forecast.
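The critical-path logic behind those baselines can be sketched in a few lines. The node names and dependency edges below are invented for illustration (the server derives them from live data), but the per-sector hours are the defaults quoted above, and the random node-level scaling is omitted for clarity.

```python
# Hedged sketch: CPM-style recovery on a toy dependency DAG using the
# per-sector baseline hours quoted above. A node cannot finish recovering
# before all of its prerequisites have recovered.
HOURS = {"energy": 72, "water": 48, "telecom": 24,
         "transport": 36, "cyber": 12, "financial": 8}

# (name, sector) -> list of prerequisite nodes; topology is hypothetical
deps = {
    ("substation", "energy"): [],
    ("pump_station", "water"): [("substation", "energy")],
    ("cell_tower", "telecom"): [("substation", "energy")],
    ("data_center", "cyber"): [("substation", "energy"), ("cell_tower", "telecom")],
    ("exchange", "financial"): [("data_center", "cyber")],
}

def finish_time(node):
    """Earliest completion = longest prerequisite chain + own duration."""
    name, sector = node
    start = max((finish_time(d) for d in deps[node]), default=0)
    return start + HOURS[sector]

critical = max(deps, key=finish_time)
print(critical, finish_time(critical))  # ('pump_station', 'water') 120
```

The overall recovery estimate is the finish time of the critical path; here the water branch (72 h energy + 48 h water = 120 h) dominates, even though the financial chain traverses more nodes.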
Does this MCP server actively scan or probe infrastructure? No. All data collection is passive. OSM and Nominatim use public geographic databases. DNS lookups use standard resolution (no zone transfers or active probing). Censys data comes from their public internet scan database. Tech stack detection uses standard HTTP GET requests. No custom packets, port scans, or authentication attempts are made against target systems.
How is this different from commercial infrastructure risk platforms like Dragos or Claroty? Dragos and Claroty are OT/ICS security platforms that require sensor deployment inside operational technology networks for real-time monitoring and asset discovery. This MCP server is a publicly available data intelligence layer that operates entirely from external sources (OSM, NVD, USGS, etc.). It complements OT security tools by providing regional macro-level cascade and resilience analysis without requiring network access or agent deployment.
Can I use this for NERC CIP or DORA compliance documentation?
The generate_resilience_assessment tool produces structured JSON output with findings and recommendations that can be incorporated into regulatory documentation. However, the tool's output is based on public data and approximate models — it should be reviewed by a qualified engineer before submission as evidence for regulatory compliance. It is best used to structure the analysis narrative and identify gaps rather than as a primary compliance attestation.
How long does a typical tool call take?
Lighter tools (identify_critical_nodes, simulate_cascade_failure) typically complete in 3–5 minutes. More comprehensive tools (generate_resilience_assessment, assess_cyber_physical_attack with multiple domains) may take 8–15 minutes as they run more underlying actors in parallel. The server has a 120-second timeout per underlying actor call.
Can I schedule this MCP server to run weekly resilience snapshots?
Yes. You can trigger actor runs on a schedule via the Apify platform's built-in scheduler or via cron-driven API calls. Weekly identify_critical_nodes or generate_resilience_assessment runs for monitored regions will track how infrastructure changes (new OSM data, new CVEs, seasonal hazard patterns) affect the resilience index over time.
Is it legal to analyze infrastructure data this way? Yes. All data sources used are publicly available — OpenStreetMap, USGS, NOAA, FEMA, GDACS, and NVD are government or open data services explicitly designed for public use. Censys and crt.sh provide their data for security research purposes. No private or classified infrastructure data is accessed. See Apify's guide on web scraping legality for broader context on data collection practices.
What happens if a region has very little OpenStreetMap infrastructure data?
The server will still run but will build a smaller network. The algorithms scale down gracefully — a 5-node network will still produce cascade simulation results, geographic correlation cells, and a recovery timeline, though with less statistical significance. Providing domains in the input will enrich the network even in regions with sparse OSM coverage.
Help us improve
If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions or enterprise integrations — such as adding proprietary infrastructure data sources, custom sector definitions, or integration with SCADA telemetry feeds — reach out through the Apify platform.
How it works
Configure
Set your parameters in the Apify Console or pass them via API.
Run
Click Start, trigger via API, webhook, or set up a schedule.
Get results
Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.
Related actors
Bulk Email Verifier
Verify email deliverability at scale. MX record validation, SMTP mailbox checks, disposable and role-based detection, catch-all flagging, and confidence scoring. No external API costs.
GitHub Repository Search
Search GitHub repositories by keyword, language, topic, stars, forks. Sort by stars, forks, or recently updated. Returns metadata, topics, license, owner info, URLs. Free API, optional token for higher limits.
Website Content to Markdown
Convert any website to clean Markdown for RAG pipelines, LLM training, and AI apps. Crawls pages, strips boilerplate, preserves headings, tables, and code blocks. GFM support.
Website Tech Stack Detector
Detect 100+ web technologies on any website. Identifies CMS, frameworks, analytics, marketing tools, chat widgets, CDNs, payment systems, hosting, and more. Batch-analyze multiple sites with version detection and confidence scoring.