Supply Chain Digital Twin MCP
Supply chain digital twin simulation via 8 quantitative algorithms — connect any AI agent to live corporate, trade, sanctions, hazard, and financial data across 17 public data sources. This MCP server turns natural-language supply chain questions into structured risk intelligence: cascade failure probabilities, optimal transport routes, adversarial interdiction games, supplier survival curves, Leontief multipliers, and multi-horizon demand forecasts.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| simulate-disruption-cascade | Buldyrev percolation on interdependent networks | $0.10 |
| optimize-logistics-transport | Sinkhorn optimal transport for logistics | $0.08 |
| model-adversarial-interdiction | Stackelberg attacker-defender bilevel optimization | $0.10 |
| estimate-supplier-survival | Competing risks cause-specific hazard analysis | $0.06 |
| reinforce-network-resilience | Algebraic connectivity maximization | $0.08 |
| compute-input-output-impact | Leontief input-output matrix inversion | $0.08 |
| identify-causal-disruption-paths | Causal identification via fuzzy RDD | $0.06 |
| forecast-multi-scale-demand | Rao-Blackwellized particle filter forecasting | $0.10 |
Example: 100 events = $10.00 · 1,000 events = $100.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
https://ryanclinton--supply-chain-digital-twin-mcp.apify.actor/mcp
{
  "mcpServers": {
    "supply-chain-digital-twin-mcp": {
      "url": "https://ryanclinton--supply-chain-digital-twin-mcp.apify.actor/mcp"
    }
  }
}
Documentation
Built for supply chain risk analysts, operations researchers, and AI agents that need real-world data fused with rigorous algorithms — not static databases. Each tool runs up to 17 actors in parallel, assembles a live supply chain network graph, then applies a specific algorithm from network science, operations research, or econometrics.
⬇️ What data can you access?
| Data Point | Source | Example use |
|---|---|---|
| 📋 Global corporate registrations and ownership | OpenCorporates | Tier-1/2/3 supplier identification |
| 🏢 UK company filings and financial data | UK Companies House | UK entity verification and financials |
| 🔗 Legal Entity Identifiers and parent-child chains | GLEIF LEI | Corporate ownership mapping |
| 🌐 International bilateral trade flows | UN COMTRADE | Inter-sector IO coefficient estimation |
| 📊 Economic development indicators | World Bank | Country-level macro risk factors |
| 🏛️ Government contract awards and opportunities | SAM.gov | Defense and federal supply chain exposure |
| 🚫 OFAC sanctions and blocked persons list | OFAC | Sanctioned entity flagging |
| 🌍 Global sanctions, PEP, and watchlist data | OpenSanctions | Multi-jurisdiction sanctions screening |
| 🌋 Earthquake activity and seismic magnitudes | USGS Earthquake | Natural hazard proximity scoring |
| 🌩️ Severe weather events and climate data | NOAA | Hazard exposure for geographic nodes |
| 🆘 Global disaster alerts and severity scores | GDACS | Multi-hazard disruption triggers |
| 🔴 US federal emergency declarations | FEMA | Domestic disaster exposure |
| 📍 Geocoding and lat/lon resolution | Nominatim | Haversine distance-based hazard mapping |
| 💱 Foreign exchange rates and currency data | Exchange Rate Tracker | FX risk in cross-border trade flows |
| 🔐 Internet host certificates and vulnerabilities | Censys | Supplier cyber exposure scoring |
| 🌫️ Air quality index by country and region | OpenAQ | Environmental operational risk |
| 🖥️ Website uptime and availability monitoring | Website Monitor | Supplier operational continuity signals |
Why use Supply Chain Digital Twin MCP?
Manual supply chain risk analysis means pulling data from a dozen government portals, building spreadsheets, and applying heuristics. A thorough analysis of a single supplier portfolio can take a team of analysts a full week — and still miss cascade dynamics, game-theoretic attack surfaces, and multi-horizon demand signals.
This MCP server replaces that workflow. Paste a query into any MCP-compatible AI agent. The server calls up to 17 live data sources in parallel, builds a weighted network graph with Leontief technical coefficients, and runs the requested algorithm. You get structured JSON results in seconds — not weeks.
- Scheduling — run on any cadence via the Apify platform to keep supply chain models current
- API access — trigger tool calls from Python, JavaScript, or any HTTP-based MCP client
- 17 parallel data sources — corporate, trade, sanctions, hazard, cyber, environmental, and financial data assembled in one call
- Monitoring — receive Slack or email alerts when runs fail or return anomalous results
- Integrations — connect to Claude Desktop, Cursor, Windsurf, LangChain, or any MCP-compatible client
Features
- Buldyrev-Parshani-Stanley mutual percolation — models two interdependent supply networks; iteratively removes non-giant-component nodes and propagates failures until a stable state, detecting first-order phase transitions at the critical threshold p_c*
- Cross-entropy importance sampling — optimizes the Monte Carlo proposal distribution q*(theta) to focus simulation effort on catastrophic rare events, delivering variance reduction ratios orders of magnitude better than naive Monte Carlo
- Sinkhorn-Knopp entropy-regularized optimal transport — solves min_{P} <C,P> - epsilonH(P) subject to marginal constraints via iterative scaling u = mu/(Kv), v = nu/(K^T*u), computing the Wasserstein-2 distance as total logistics cost
- Tri-level Stackelberg attacker-defender game — models adversarial interdiction as max_x min_y max_z f(x,y,z); solved via Benders decomposition with optimality cuts; reports Nash equilibrium resource allocation and convergence gap
- Competing risks Cox proportional hazard model — five cause-specific hazards (financial, natural disaster, sanctions/regulatory, cyber, quality/compliance) with cumulative incidence F_k(t) = integral h_k(u)*S(u)du and Harrell's concordance index
- Algebraic connectivity maximization — computes lambda_2 (Fiedler value) of the graph Laplacian via power iteration with deflation; greedy edge additions scored by gain_ij = (v_i - v_j)^2 using the Fiedler vector
- Leontief input-output via Neumann series — inverts (I-A) as L = sum_{k=0}^{K} A^k up to 15 terms; computes backward linkage (column sums), forward linkage (row sums), and the Ghosh supply-side multiplier; identifies keystone industries with above-average linkage in both directions
- Fuzzy regression discontinuity design — estimates tau_FRD = (E[Y|X>=c] - E[Y|X<c]) / (E[D|X>=c] - E[D|X<c]) with Imbens-Kalyanaraman (2012) optimal bandwidth h_IK = C_k * (sigma^2 / (f(c)*m''(c)^2))^{1/5} * n^{-1/5} at geographic and temporal thresholds
- Rao-Blackwellized particle filter — samples nonlinear regime component via particles, marginalizes the linear state-space (trend, seasonal, cycle) analytically via Kalman filter per particle; systematic resampling when effective sample size ESS < N/2
- Stochastic DP via Bellman equation — solves V(s) = min_a { c(s,a) + gamma*E[V(s')] } for optimal base-stock inventory policy with uncertainty quantification
- Haversine distance-based hazard exposure — assigns hazard exposure to supplier nodes within 500 km of a hazard event (1 - dist/500), falling back to seeded probabilistic assignment (15% chance, 0.3 weight) when coordinates are unavailable
- Leontief matrix normalization — enforces sub-stochastic rows (spectral radius < 1) by scaling rows with sum >= 0.9 by factor 0.85/rowSum, guaranteeing Neumann series convergence
- Seeded deterministic pseudo-randomness — all stochastic components use a string-hash-seeded linear congruential generator for reproducible results across identical inputs
- 17 actors called in parallel — all data fetches use Promise.all for maximum throughput; each actor call has a 180-second timeout with graceful empty-array fallback on failure
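As a concrete illustration of the Sinkhorn-Knopp scaling loop listed above, here is a minimal TypeScript sketch; the epsilon, tolerance, and iteration cap below are illustrative choices, not the server's internals.

```typescript
// Minimal sketch of entropy-regularized optimal transport via Sinkhorn-Knopp.
function sinkhorn(
  cost: number[][], // cost matrix C (n x m)
  mu: number[],     // supply marginal, sums to 1
  nu: number[],     // demand marginal, sums to 1
  epsilon = 0.1,
  maxIter = 1000,
  tol = 1e-7
): number[][] {
  // Gibbs kernel K = exp(-C / epsilon)
  const K = cost.map((row) => row.map((c) => Math.exp(-c / epsilon)));
  let u = new Array(mu.length).fill(1);
  let v = new Array(nu.length).fill(1);
  for (let it = 0; it < maxIter; it++) {
    // u = mu / (K v), then v = nu / (K^T u)
    const Kv = K.map((row) => row.reduce((s, k, j) => s + k * v[j], 0));
    const uNew = mu.map((x, i) => x / Kv[i]);
    const KTu = nu.map((_, j) => K.reduce((s, row, i) => s + row[j] * uNew[i], 0));
    const vNew = nu.map((x, j) => x / KTu[j]);
    const delta = Math.max(...uNew.map((x, i) => Math.abs(x - u[i])));
    u = uNew;
    v = vNew;
    if (delta < tol) break;
  }
  // Transport plan P_ij = u_i * K_ij * v_j approximately satisfies both marginals
  return K.map((row, i) => row.map((k, j) => u[i] * k * v[j]));
}
```

The total transport cost is then sum_ij C_ij P_ij, which plays the role of the Wasserstein-2 logistics cost in the tool's output.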
Use cases for supply chain digital twin simulation
Supply chain stress testing and resilience scoring
Risk managers at manufacturing firms and diversified industrials use simulate_disruption_cascade to stress-test their supplier networks before quarterly board reviews. Input a company name or sector description. The tool assembles live corporate, sanctions, hazard, and trade data, constructs a two-layer interdependent network, and runs up to 5,000 Monte Carlo simulations with cross-entropy importance sampling to estimate the probability of catastrophic cascades. Output includes the percolation threshold, first-order transition flag, and a ranked list of critical suppliers.
Logistics network cost optimization
Operations teams at 3PLs and large manufacturers use optimize_logistics_transport to identify the lowest-cost routing and sourcing configuration given current supply and demand distributions. The Sinkhorn algorithm solves the entropy-regularized optimal transport problem, returning Wasserstein-2 distance as the optimal cost, dual shadow prices for each capacity-constrained node, and a full set of optimal routes with utilization rates.
Critical infrastructure and defense supply chain protection
Government contractors and defense procurement teams use model_adversarial_interdiction to identify which supply chain links an adversary would most rationally target and where defensive investment yields the highest return. The tri-level Stackelberg game produces a Nash equilibrium that specifies which edges are interdiction targets, how much protection each deserves, and the Benders convergence gap as a quality indicator.
Supplier portfolio risk management
Procurement teams and supply chain finance professionals use estimate_supplier_survival to assess the 12-month failure probability of every supplier in a portfolio, decomposed into five competing failure causes. Output includes per-supplier survival curves, cumulative incidence by failure mode, expected number of failures over the next year, and Harrell's concordance index as a model fit metric.
Economic impact and sectoral contagion analysis
Policy researchers and corporate economists use compute_input_output_impact to estimate how a production shock in one sector or country propagates through the full input-output network. The Leontief multiplier tells you total output impact per unit of direct demand change. Keystone industry identification flags the sectors whose disruption would most damage the entire network — essential for both business continuity planning and macroeconomic policy analysis.
Demand forecasting and inventory optimization
Supply chain planners use forecast_multi_scale_demand to generate uncertainty-quantified demand forecasts across multiple horizons (weekly, monthly, quarterly) and compute the optimal base-stock level for each. The Rao-Blackwellized particle filter decomposes demand into trend, seasonal, and regime components, and the stochastic DP solver finds the inventory policy that minimizes expected holding plus shortage costs.
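The Bellman recursion V(s) = min_a { c(s,a) + gamma*E[V(s')] } behind the base-stock policy can be sketched as a toy value iteration. The demand distribution, cost rates, and state cap below are illustrative assumptions, not the server's parameters.

```typescript
// Toy value iteration for an order-up-to (base-stock) inventory policy.
function baseStockPolicy(
  maxInv: number,
  demandPmf: Map<number, number>, // P(demand = d)
  hold = 1,     // holding cost per unit left over
  shortage = 5, // shortage cost per unit unmet
  gamma = 0.95,
  sweeps = 200
): { value: number[]; orderUpTo: number[] } {
  const V = new Array(maxInv + 1).fill(0);
  const orderUpTo = new Array(maxInv + 1).fill(0);
  for (let sweep = 0; sweep < sweeps; sweep++) {
    for (let s = 0; s <= maxInv; s++) {
      let best = Infinity;
      let bestTarget = s;
      for (let target = s; target <= maxInv; target++) { // action: order up to `target`
        let cost = 0;
        for (const [d, p] of demandPmf) {
          const leftover = Math.max(target - d, 0);
          const unmet = Math.max(d - target, 0);
          // c(s,a) + gamma * E[V(s')], with s' = leftover inventory
          cost += p * (hold * leftover + shortage * unmet + gamma * V[leftover]);
        }
        if (cost < best) { best = cost; bestTarget = target; }
      }
      V[s] = best;
      orderUpTo[s] = bestTarget;
    }
  }
  return { value: V, orderUpTo };
}
```

Because this toy omits a per-unit ordering cost, the optimal order-up-to level is the same for every starting inventory below it, which is exactly the base-stock structure the tool reports.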
How to connect this MCP server
This server runs in Apify Standby mode — it stays alive and accepts MCP connections over HTTP. The endpoint is always available.
Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"supply-chain-digital-twin": {
"url": "https://supply-chain-digital-twin-mcp.apify.actor/mcp"
}
}
}
Cursor
Add to your Cursor MCP settings:
{
"mcpServers": {
"supply-chain-digital-twin": {
"url": "https://supply-chain-digital-twin-mcp.apify.actor/mcp"
}
}
}
Windsurf / other MCP clients
Any client that supports the MCP streamable HTTP transport can connect to:
https://supply-chain-digital-twin-mcp.apify.actor/mcp
No authentication headers required. The Apify token is managed server-side.
⬆️ MCP tools reference
| Tool | Algorithm | Data sources called | Output highlights |
|---|---|---|---|
| simulate_disruption_cascade | Buldyrev mutual percolation + cross-entropy IS | 17 actors | Cascade steps, percolation threshold, first-order transition flag, recovery days, variance reduction ratio |
| optimize_logistics_transport | Sinkhorn-Knopp entropy-regularized OT | 8 actors | Optimal routes, Wasserstein-2 distance, shadow prices, Sinkhorn convergence |
| model_adversarial_interdiction | Tri-level Stackelberg + Benders decomposition | 8 actors | Game value, Nash equilibrium, interdiction targets, defender reinforcements, Benders gap |
| estimate_supplier_survival | Competing risks Cox PH model | 11 actors | Per-supplier survival probability, 5-cause cumulative incidence, expected failures 12m, concordance index |
| reinforce_network_resilience | Algebraic connectivity (Fiedler value) | 7 actors | Lambda_2, redundancy score, Fiedler vector, optimal edge additions with lambda_2 gain |
| compute_input_output_impact | Leontief I-O via Neumann series | 6 actors | Leontief inverse, forward/backward linkages, Ghosh multiplier, keystone industries, spectral radius |
| identify_causal_disruption_paths | Fuzzy RDD + IK bandwidth | 11 actors | Causal paths, RDD estimates, IK bandwidth, total causal effect, confounders |
| forecast_multi_scale_demand | RBPF + stochastic DP Bellman | 7 actors | Multi-horizon forecasts with 95% CI, optimal base-stock, service level, effective particles |
Tool parameters
| Parameter | Type | Required | Default | Applies to |
|---|---|---|---|---|
| query | string | Yes | — | All 8 tools |
| simulations | number | No | 1000 | simulate_disruption_cascade only (max 5000) |
The query parameter accepts free-text descriptions of supply chain scenarios, company names, industries, commodities, or geographic regions. The server extracts intent and passes it to each underlying data actor as a search query.
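For reference, these parameters travel inside a standard MCP tools/call request. The sketch below shows a hypothetical JSON-RPC body a client might POST to the endpoint; the method and field names follow the MCP tools/call convention, and the argument values are examples only.

```typescript
// Hypothetical MCP tools/call request body (JSON-RPC 2.0).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "simulate_disruption_cascade",
    arguments: {
      query: "semiconductor suppliers in Taiwan for automotive electronics",
      simulations: 2000,
    },
  },
};
// An MCP client serializes this and POSTs it to the /mcp endpoint
const body = JSON.stringify(request);
```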
Output examples
simulate_disruption_cascade
{
"nodesAffected": 23,
"totalImpact": 4.87,
"percolationThreshold": 0.312,
"firstOrderTransition": true,
"criticalSuppliers": [
"Pinnacle Semiconductor GmbH",
"Meridian Electronics Co Ltd",
"Vantage Logistics Holdings"
],
"recoveryTimeDays": 47,
"ceProposalDistortion": 0.0234,
"monteCarloRuns": 1000,
"varianceReduction": 18.4,
"cascadeSteps": [
{
"node": "Pinnacle Semiconductor GmbH",
"tier": 1,
"failureProbability": 0.78,
"cascadeDelay": 3,
"economicImpact": 1.24,
"networkA": true
},
{
"node": "Meridian Electronics Co Ltd",
"tier": 2,
"failureProbability": 0.61,
"cascadeDelay": 8,
"economicImpact": 0.89,
"networkA": false
}
],
"networkSize": { "nodes": 94, "edges": 187 }
}
estimate_supplier_survival
{
"portfolioSurvival": 0.71,
"expectedFailures12m": 4.2,
"concordanceIndex": 0.724,
"competingRisks": [
{ "cause": "financial", "cumulativeIncidence": 0.142 },
{ "cause": "natural_disaster", "cumulativeIncidence": 0.081 },
{ "cause": "sanctions", "cumulativeIncidence": 0.034 },
{ "cause": "cyber", "cumulativeIncidence": 0.058 },
{ "cause": "quality_compliance", "cumulativeIncidence": 0.091 }
],
"suppliers": [
{
"supplier": "Acme Fabrications Ltd",
"survivalProbability": 0.84,
"hazardRate": 0.024,
"medianLifetimeYears": 8.3,
"riskFactors": [
{ "factor": "sanctioned", "hazardRatio": 3.41, "coefficient": 1.227 },
{ "factor": "hazardExposure", "hazardRatio": 2.18, "coefficient": 0.780 },
{ "factor": "cyberExposure", "hazardRatio": 1.64, "coefficient": 0.495 }
],
"causeSpecificHazards": [
{ "cause": "financial", "hazard": 0.018, "cumulativeIncidence": 0.098 },
{ "cause": "natural_disaster", "hazard": 0.009, "cumulativeIncidence": 0.049 }
]
}
],
"networkSize": { "nodes": 71, "edges": 132 }
}
optimize_logistics_transport
{
"totalCost": 2847300,
"totalTimeDays": 18.4,
"wasserstein2Distance": 0.3812,
"sinkhornIterations": 47,
"sinkhornConvergence": 0.0000083,
"dualVariables": [
{ "constraint": "Shanghai-Port", "shadowPrice": 142.50 },
{ "constraint": "Rotterdam-Hub", "shadowPrice": 98.20 },
{ "constraint": "Chicago-DC", "shadowPrice": 67.80 }
],
"optimalRoutes": [
{
"from": "Shenzhen Manufacturing Zone",
"to": "Los Angeles Distribution Center",
"mode": "sea",
"cost": 1240000,
"timeDays": 21,
"capacity": 5000,
"utilization": 0.87,
"sinkhornWeight": 0.412
}
],
"networkSize": { "nodes": 58, "edges": 103 }
}
❓ How much does it cost to run supply chain digital twin simulations?
This MCP uses pay-per-event pricing — you pay per tool call. Compute costs are included. The pricing per tool reflects the number of underlying actor calls and algorithm complexity.
| Tool | Price per call | Typical monthly budget (20 calls) |
|---|---|---|
| simulate_disruption_cascade | $0.040 | $0.80 |
| optimize_logistics_transport | $0.040 | $0.80 |
| model_adversarial_interdiction | $0.045 | $0.90 |
| estimate_supplier_survival | $0.035 | $0.70 |
| reinforce_network_resilience | $0.040 | $0.80 |
| compute_input_output_impact | $0.035 | $0.70 |
| identify_causal_disruption_paths | $0.035 | $0.70 |
| forecast_multi_scale_demand | $0.030 | $0.60 |
For AI agent workflows running dozens of tool calls per session, a typical session cost is under $1.00. You can set a maximum spending limit per run in your Apify account settings to prevent unexpected charges — the server checks this limit before executing and returns a budget-exceeded message rather than proceeding.
Compare this to enterprise supply chain risk platforms (Resilinc, Riskmethods, Everstream) at $50,000–$200,000 per year for similar analytical capabilities. This MCP delivers quantitative simulation on live data for a few cents per analysis.
How Supply Chain Digital Twin MCP works
Phase 1: Parallel data assembly
Each tool call dispatches between 6 and 17 actor calls via Promise.all, all running in parallel with a 180-second timeout. Corporate data comes from OpenCorporates, UK Companies House, and GLEIF LEI. Trade flow data comes from UN COMTRADE and World Bank. Sanctions screening queries OFAC and OpenSanctions. Natural hazard data comes from USGS Earthquake, NOAA, GDACS, and FEMA. Location resolution uses Nominatim. Cyber exposure uses Censys. Environmental risk uses OpenAQ. Operational monitoring uses Website Monitor. Government contract exposure uses SAM.gov. Any actor that times out or returns an error is treated as an empty array — the algorithm proceeds with available data.
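The fan-out-with-fallback pattern described above can be sketched as follows. The helper and the stand-in actor calls are illustrative; the real server dispatches Apify actor runs.

```typescript
// Race each data fetch against a timeout; degrade to an empty array on
// timeout or error so the algorithm can proceed with available data.
async function withFallback<T>(call: Promise<T[]>, timeoutMs: number): Promise<T[]> {
  const timeout = new Promise<T[]>((resolve) =>
    setTimeout(() => resolve([]), timeoutMs)
  );
  try {
    return await Promise.race([call, timeout]);
  } catch {
    return []; // failed actor call -> empty array
  }
}

async function assembleData(timeoutMs = 180_000) {
  // Stand-ins for upstream actor calls, all fired in parallel
  const corporateCall = Promise.resolve([{ name: "Acme Fabrications Ltd" }]);
  const sanctionsCall: Promise<{ name: string }[]> = Promise.reject(
    new Error("upstream 500")
  );
  const [corporate, sanctions] = await Promise.all([
    withFallback(corporateCall, timeoutMs),
    withFallback(sanctionsCall, timeoutMs),
  ]);
  return { corporate, sanctions }; // sanctions === [] after graceful fallback
}
```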
Phase 2: Network graph construction
buildSupplyChainNetwork assembles all actor results into a SupplyChainNetwork object. Corporate entities are assigned to tiers 1, 2, or 3 based on their position in the corporate data array. Hazard nodes are linked to supplier nodes using haversine distance — exposure weight = 1 - dist/500 for nodes within 500 km; probabilistic assignment (15% chance, weight 0.3) otherwise. Air quality risk is mapped by country code. Cyber exposure is matched by name prefix. Supply edges between tiers carry random IO coefficients (0.05–0.40) seeded deterministically from a string hash of the node pair. The Leontief technical coefficient matrix A is built from these IO coefficients, then row-normalized to enforce spectral radius < 1 (Neumann series convergence condition).
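The haversine-based exposure weighting described above amounts to a short calculation; a minimal sketch (the probabilistic fallback for ungeocoded nodes is omitted here):

```typescript
const EARTH_RADIUS_KM = 6371;

// Great-circle distance between two lat/lon points in kilometers
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Exposure weight = 1 - dist/500 within 500 km of the hazard, else 0
function hazardExposure(distKm: number): number {
  return distKm < 500 ? 1 - distKm / 500 : 0;
}
```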
Phase 3: Algorithm execution
Each of the 8 algorithms operates on the assembled SupplyChainNetwork:
- Percolation partitions nodes into two networks by node index, builds a dependency map, and iterates removal of non-giant-component nodes across both layers. Cross-entropy importance sampling perturbs edge retention probabilities by ceProposalDistortion to concentrate simulation mass on rare cascade events.
- Optimal transport constructs a cost matrix from haversine distances between supplier and distributor nodes, then runs Sinkhorn iterations until convergence (tolerance 1e-7, max 1000 iterations).
- Interdiction enumerates supply edges as potential attack targets, scores each by criticality * weight, and solves the bi-level assignment via greedy Benders cuts. The game value is the expected flow reduction at Nash equilibrium.
- Survival analysis computes cause-specific Cox hazard rates from node attributes (sanctioned status, hazard exposure, cyber exposure, air quality risk, contract dependency), then integrates cumulative incidence functions numerically.
- Algebraic connectivity builds the graph Laplacian L = D - A, then runs 50 iterations of power iteration with deflation against the constant eigenvector to estimate lambda_2 and the Fiedler vector.
- Input-output computes the Leontief inverse as a Neumann series truncated at 15 terms, checks spectral radius via 30-step power iteration, and identifies keystone sectors as those with above-average forward and backward linkages.
- Causal paths applies fuzzy RDD at geographic thresholds (500 km bands) and temporal thresholds, computing IK optimal bandwidth and local linear regression estimates on either side of each discontinuity.
- Demand forecasting runs a particle filter with 50 particles, Kalman filter per particle for linear state components, systematic resampling when ESS < 25, then backward-passes Bellman recursion to find the optimal base-stock level.
Phase 4: Structured JSON output
All results are serialized to JSON and returned as MCP tool call content. Floating point values are rounded to 2–4 decimal places to keep responses readable in agent contexts. Large matrices (Leontief inverse) are truncated to 6×6 for display.
Tips for best results
- Be specific in your query. "Semiconductor supply chain for automotive electronics, Taiwan and Japan suppliers" produces a richer network than "electronics supply chain." Specificity drives more relevant actor results, which produce more accurate network edges.
- Use simulate_disruption_cascade for unknown unknowns. If you are not sure which algorithm to start with, cascade simulation is the broadest diagnostic. It surfaces critical suppliers, percolation thresholds, and whether the network sits near a first-order transition — information that shapes which other tools to run next.
- Follow cascade simulation with reinforce_network_resilience. Once you know which suppliers are critical (from cascade output), use resilience reinforcement to identify which new supplier relationships (edge additions) would most increase algebraic connectivity at lowest cost.
- Combine estimate_supplier_survival with model_adversarial_interdiction. Survival analysis identifies suppliers most likely to fail naturally. Interdiction modeling identifies suppliers most likely to be deliberately targeted. Together they give a complete vulnerability picture.
- Use compute_input_output_impact for board-level reporting. Leontief multipliers translate technical supply chain risk into financial language — total output change per unit of disruption. Keystone industry identification is directly presentable to non-technical stakeholders.
- Set a spending limit per session. For exploratory agent workflows, set a per-run budget in your Apify account. The server checks the limit before each tool call and stops gracefully rather than continuing to charge.
- Chain forecast_multi_scale_demand with inventory decisions. The optimal base-stock output is directly usable in ERP or planning systems. Export the forecasts array as CSV via the Apify dataset API for import into Excel or Google Sheets.
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| Company Deep Research | Run deep research on the critical suppliers identified by simulate_disruption_cascade to get full financial, legal, and operational profiles before escalating to procurement leadership |
| Website Contact Scraper | Extract contact details from the websites of high-risk suppliers flagged by estimate_supplier_survival to initiate proactive supplier diversification outreach |
| Trustpilot Review Analyzer | Cross-reference supplier quality compliance risk scores with customer reviews and complaint patterns for triangulated supplier health assessment |
| B2B Lead Qualifier | Score potential replacement suppliers identified through trade data before adding them to a shortlist for RFQ |
| Website Tech Stack Detector | Enrich cyber exposure data from Censys with technology stack analysis of supplier web properties to produce a more accurate cyberExposure score |
| WHOIS Domain Lookup | Verify domain registration authenticity for suppliers flagged as high-risk in the interdiction model |
| Website Change Monitor | Monitor supplier websites for changes (office closures, product discontinuation notices, restructuring announcements) that may precede the failure events predicted by the survival model |
Limitations
- No proprietary ERP or inventory data — the network graph is built entirely from public sources. Internal BOM data, actual inventory levels, and proprietary lead times are not incorporated. For enterprise use, the scoring algorithms can be adapted with private data inputs via the Apify API.
- Leontief model assumes linear production technology — fixed technical coefficients do not capture substitution effects, economies of scale, or non-linear production functions. Disruptions that cause fundamental structural shifts will be underestimated.
- Cox proportional hazards assumes time-invariant hazard ratios — if the relationship between risk factors and failure probability changes over time (e.g., sanctions risk growing due to geopolitical escalation), the competing risks model will not capture this.
- Cross-entropy importance sampling requires valid proposal distribution — for very small networks (fewer than 10 supplier nodes), the CE proposal may not have enough structure to improve on naive Monte Carlo.
- Benders decomposition convergence is not guaranteed for all game structures — very large or degenerate interdiction games may produce high Benders gap values, indicating the reported game value is approximate.
- Haversine-based hazard mapping requires geocoded nodes — when supplier coordinates are unavailable (lat/lon = 0), the system falls back to probabilistic hazard assignment, which is less precise.
- Actor data freshness depends on upstream sources — government databases (SAM.gov, OFAC, USGS) are updated on varying schedules. The model reflects data available at the time of the call, not real-time streaming feeds.
- Network construction is capped at 20 sectors for the Leontief matrix, and haversine edge creation is bounded to supplier pairs within a sliding window of 6 — large queries may produce sparser networks than smaller, focused queries.
Integrations
- Claude Desktop — connect via claude_desktop_config.json; ask Claude to "run a disruption cascade simulation for the automotive semiconductor supply chain"
- Cursor — add to MCP settings and use in agent mode for code-adjacent supply chain analysis
- Apify API — trigger tool calls programmatically from Python, JavaScript, or any HTTP client via the standby endpoint
- LangChain / LlamaIndex — use as a tool in LangChain agent pipelines for automated supply chain monitoring workflows
- Webhooks — trigger downstream actions (Slack alerts, CRM updates) when a run completes or a cascade simulation crosses a risk threshold
- Make — build no-code supply chain risk workflows: trigger on schedule, run simulation, push results to Google Sheets or HubSpot
- Zapier — connect supply chain simulation results to email, Slack, Airtable, or any Zapier-supported destination
Troubleshooting
Receiving Spending limit reached in tool output — the per-run budget set in your Apify account has been exhausted. Go to the actor's run settings and increase the maximum spend, or run the tool independently with a higher limit.
Network returns very few nodes (fewer than 10) — the query may be too generic or too narrow for the underlying actors to return sufficient corporate and trade records. Try adding geographic or industry context: "semiconductor suppliers in Taiwan and South Korea for consumer electronics" rather than "electronics suppliers."
Sinkhorn algorithm not converging (high sinkhornConvergence value) — this occurs when supply and demand distributions are very imbalanced. The tool still returns a result using the best available approximation after 1,000 iterations. Consider narrowing the logistics query to a more balanced geographic scope.
Cascade simulation returns firstOrderTransition: false for every run — the network may be too sparse or the query may not be returning enough hazard or sanctions data to produce interdependent network layers. Add explicit geographic context to your query to improve hazard data coverage.
Actor timeout errors in logs — individual underlying actors have a 180-second timeout. Slow responses from government APIs (SAM.gov, GLEIF) can occasionally cause timeouts. The server degrades gracefully — timed-out actors return empty arrays and the algorithm runs with available data.
Results differ slightly between identical queries run at different times — the network graph uses deterministic seeding for all stochastic components, but actual live data from the 17 upstream actors changes over time. Differences in results reflect real changes in the underlying data (new corporate registrations, updated sanctions, new hazard events), not randomness in the algorithm.
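The seeded-determinism point above can be illustrated with a short sketch. The string hash and LCG constants below are common textbook choices, not necessarily the server's actual ones.

```typescript
// Derive a 32-bit seed from a string (simple 31-multiplier rolling hash)
function hashSeed(s: string): number {
  let h = 0;
  for (const ch of s) h = (Math.imul(31, h) + ch.charCodeAt(0)) >>> 0;
  return h || 1; // avoid a zero seed
}

// Linear congruential generator: same seed -> same draw sequence
function makeLcg(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    // Numerical Recipes constants: a = 1664525, c = 1013904223, m = 2^32
    state = (Math.imul(1664525, state) + 1013904223) >>> 0;
    return state / 2 ** 32; // uniform draw in [0, 1)
  };
}
```

With this scheme, two runs over identical input data produce identical stochastic components; any variation between runs comes from the data, not the generator.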
Responsible use
- All 17 data sources accessed by this MCP are publicly available government databases, international trade datasets, and open corporate registries.
- Sanctions and watchlist data (OFAC, OpenSanctions) should be used only for legitimate compliance and due diligence purposes.
- Supply chain intelligence derived from this tool should not be used to facilitate unlawful discrimination, market manipulation, or targeted harassment of individuals or companies.
- Comply with GDPR, applicable trade compliance laws, and data protection regulations when storing or processing output that includes personal or entity data.
- For guidance on web scraping and data use legality, see Apify's guide.
❓ FAQ
How many data sources does each supply chain simulation call?
The most comprehensive tool, simulate_disruption_cascade, calls all 17 actors in parallel: 3 corporate registries, 2 trade/economic sources, 2 sanctions databases, 4 natural hazard sources, 1 geocoder, 1 FX source, 1 cyber intelligence source, 1 environmental source, 1 government contract source, and 1 website monitor. Less complex tools like forecast_multi_scale_demand call 7 actors.
How accurate is the percolation threshold estimate?
The threshold p_c* is estimated as the fraction of nodes whose removal collapses the giant component. With 1,000 Monte Carlo simulations (default) and cross-entropy importance sampling, the variance reduction ratio is typically 10–20x compared to naive Monte Carlo. Increasing simulations to 5,000 reduces estimation error further.
Can I use this supply chain digital twin MCP with my own private data?
The server operates on public data sources only. However, the underlying algorithms (scoring.ts) can be invoked with custom network inputs via a private Apify actor build. Contact us through the Issues tab for custom enterprise configurations.
How is supply chain digital twin simulation different from traditional risk scoring tools?
Traditional tools assign static risk scores based on country indices or financial ratios. This MCP applies dynamic network algorithms — percolation theory captures cascade dynamics, optimal transport captures routing trade-offs, Stackelberg games capture adversarial behavior — none of which are possible with static scoring. The output is structural and causal, not just a number.
How long does a typical supply chain simulation run take?
Most tools complete in 30–120 seconds depending on data availability and query specificity. simulate_disruption_cascade with 17 parallel actor calls and 1,000 Monte Carlo simulations typically takes 60–90 seconds. Increasing simulations to 5,000 adds roughly 15–30 seconds.
Is it legal to use this data for supply chain due diligence?
Yes. All 17 data sources are government-published or internationally mandated open datasets (UN COMTRADE, OFAC, GLEIF, USGS, etc.). Using them for due diligence, compliance screening, and business intelligence is explicitly permitted and encouraged by the publishing agencies. See Apify's guide.
Can I schedule supply chain simulations to run on a regular cadence?
Yes. Use Apify's built-in scheduler to run any tool call on daily, weekly, or custom cron intervals. Combine with webhooks to push results to Slack, email, or a dashboard automatically each time a simulation completes.
How is the Fiedler value used to measure supply chain resilience?
Lambda_2 (the second-smallest eigenvalue of the graph Laplacian) measures how difficult it is to partition the network into disconnected components. A higher Fiedler value means more redundant connections and a network that degrades gracefully under node or edge removal. The reinforce_network_resilience tool identifies which specific supplier relationships to add to maximally increase lambda_2 per unit of investment.
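The Fiedler value is straightforward to compute from the Laplacian; this sketch (assuming NumPy is available) shows how closing a fragile path into a cycle raises lambda_2:

```python
import numpy as np

def fiedler_value(n, edges):
    """Second-smallest eigenvalue (lambda_2) of the graph Laplacian
    L = D - A for an undirected graph on nodes 0..n-1."""
    A = np.zeros((n, n))
    for a, b in edges:
        A[a, b] = A[b, a] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# A 4-node supply path is fragile; adding one edge to close the
# cycle adds a redundant route and raises lambda_2.
path = [(0, 1), (1, 2), (2, 3)]
cycle = path + [(3, 0)]
print(fiedler_value(4, path) < fiedler_value(4, cycle))  # True
```

For the 4-cycle, lambda_2 is exactly 2, versus roughly 0.59 for the open path, which is the "per unit of investment" gain the reinforcement tool is optimizing for.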
What happens if one of the 17 underlying actors returns an error?
Each actor call is wrapped in a try-catch. Failed calls return an empty array. The network construction function receives the empty array and simply skips that data type. The algorithm runs with reduced data — results are flagged in the networkSize output so you can see how many nodes and edges were successfully assembled.
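The graceful-degradation pattern described above looks roughly like this Python sketch (the server code is TypeScript; function names here are illustrative):

```python
def safe_call(fetch):
    """One wrapped data-source call: any failure yields an empty list
    so the network builder simply skips that data type."""
    try:
        return fetch()
    except Exception:
        return []

def build_network(fetchers):
    """Assemble records from whichever sources succeeded; the
    networkSize field reveals how much data actually arrived."""
    records = []
    for fetch in fetchers:
        records.extend(safe_call(fetch))
    return {"records": records, "networkSize": len(records)}

def broken():
    raise TimeoutError("actor run failed")

net = build_network([lambda: [{"id": "supplier-a"}], broken])
```

Here `net["networkSize"]` is 1: the failed source contributed nothing, but the run still completed.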
How does the competing risks survival model decide which failure mode is most likely for a given supplier?
Five cause-specific Cox hazard rates are computed from node attributes: sanctioned status (regulatory/sanctions risk), hazardExposure (natural disaster risk), cyberExposure (cyber risk), airQualityRisk (environmental/compliance risk), and revenue relative to network average (financial risk). Each attribute maps to a hazard coefficient via a fixed beta vector, and cumulative incidence is integrated numerically to produce cause-specific probabilities summing to the overall failure probability.
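The hazard-splitting logic can be sketched as below; the beta coefficients and baseline here are hypothetical placeholders, since the actual vector in scoring.ts is not published in this documentation:

```python
import math

# Hypothetical coefficients -- NOT the real beta vector from scoring.ts.
BETA = {
    "sanctioned": 1.2,
    "hazardExposure": 0.8,
    "cyberExposure": 0.6,
    "airQualityRisk": 0.3,
    "financialStress": 0.9,
}

def cause_specific_probabilities(node, horizon=1.0, baseline=0.05):
    """Exponential-hazard sketch: each attribute scales a baseline
    hazard via exp(beta * x); the overall failure probability is then
    split across causes in proportion to their hazard rates, so the
    cause-specific probabilities sum to the overall probability."""
    hazards = {c: baseline * math.exp(b * node.get(c, 0.0))
               for c, b in BETA.items()}
    total = sum(hazards.values())
    overall = 1.0 - math.exp(-total * horizon)
    return {c: overall * h / total for c, h in hazards.items()}

probs = cause_specific_probabilities({"sanctioned": 1.0, "cyberExposure": 0.4})
```

For a sanctioned supplier the sanctions cause dominates the split, which is exactly the "most likely failure mode" the tool reports.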
Can I pipe supply chain simulation results directly into an LLM for narrative reporting?
Yes. The JSON output from any tool is compact enough to include directly in an LLM context window. For longer outputs (cascade step arrays, full Leontief inverse matrices), use the networkSize and top-level summary fields as the LLM input, and retrieve full detail via the Apify dataset API separately. Website Content to Markdown can help convert structured data to LLM-friendly formats.
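Trimming a tool result down to its summary fields before handing it to an LLM is a one-liner; the field names below are illustrative, not the exact output schema:

```python
def llm_summary(result, keep=("networkSize", "summary")):
    """Keep only compact top-level fields for the LLM context window,
    dropping bulky detail arrays that belong in the dataset API."""
    return {k: result[k] for k in keep if k in result}

full = {
    "networkSize": {"nodes": 84, "edges": 211},
    "summary": "3 critical suppliers identified",
    "cascadeSteps": [{"step": i} for i in range(500)],  # too big for context
}
compact = llm_summary(full)
```

`compact` retains the summary and network size while the 500-element cascade array stays behind for separate retrieval.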
Help us improve
If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom enterprise integrations — private data ingestion, custom algorithm configurations, or white-label deployments — reach out through the Apify platform.
How it works
Configure
Set your parameters in the Apify Console or pass them via API.
Run
Click Start, trigger via API, webhook, or set up a schedule.
Get results
Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.
Use cases
Sales Teams
Build targeted lead lists with verified contact data.
Marketing
Research competitors and identify outreach opportunities.
Data Teams
Automate data collection pipelines with scheduled runs.
Developers
Integrate via REST API or use as an MCP tool in AI workflows.
Related actors
Bulk Email Verifier
Verify email deliverability at scale. MX record validation, SMTP mailbox checks, disposable and role-based detection, catch-all flagging, and confidence scoring. No external API costs.
GitHub Repository Search
Search GitHub repositories by keyword, language, topic, stars, forks. Sort by stars, forks, or recently updated. Returns metadata, topics, license, owner info, URLs. Free API, optional token for higher limits.
Website Content to Markdown
Convert any website to clean Markdown for RAG pipelines, LLM training, and AI apps. Crawls pages, strips boilerplate, preserves headings, tables, and code blocks. GFM support.
Website Tech Stack Detector
Detect 100+ web technologies on any website. Identifies CMS, frameworks, analytics, marketing tools, chat widgets, CDNs, payment systems, hosting, and more. Batch-analyze multiple sites with version detection and confidence scoring.
Ready to try Supply Chain Digital Twin MCP?
Start for free on Apify. No credit card required.
Open on Apify Store