
Civilizational Fragility MCP Server

Civilizational fragility assessment for AI agents — this MCP server gives Claude, GPT-4, and any MCP-compatible agent access to cross-domain cascading collapse risk analysis backed by 17 live data sources and 10 mathematical frameworks. Built for researchers, national security analysts, institutional risk teams, and AI systems that need grounded, quantitative answers to questions about systemic collapse, tipping points, and cross-domain contagion.


Pricing

Pay Per Event model. You only pay for what you use.

| Event | Description | Price |
|---|---|---|
| assess-cascading-fragility | Sheaf cohomology obstruction + coupled map lattice cascade | $0.12 |
| detect-tipping-proximity | Persistent homology on time-delay embeddings | $0.10 |
| simulate-multiplex-cascade | Heterogeneous CML on multiplex network | $0.10 |
| compute-domain-shapley | Shapley value decomposition across 6 domains | $0.10 |
| plan-intervention-decpomdp | Dec-POMDP cross-domain intervention planning | $0.12 |
| causal-cross-domain-query | Do-calculus causal reasoning across domains | $0.10 |
| track-persistent-homology | Vietoris-Rips filtration tracking | $0.08 |
| forecast-civilizational-trajectory | Mean-field game coupled HJB/FP trajectory | $0.12 |

Example: 100 events at $0.12 = $12.00 · 1,000 events = $120.00

Connect to your AI agent

Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.

MCP Endpoint
https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp
Claude Desktop Config
{
  "mcpServers": {
    "civilizational-fragility-mcp": {
      "url": "https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp"
    }
  }
}

Documentation


The server runs as an always-on Apify Standby actor and exposes 8 specialist tools via the Model Context Protocol. Each tool call fires 17 actors in parallel — spanning FRED, BLS, IMF, World Bank, OECD, NOAA, FEMA, USGS, GDACS, OpenAQ, NVD, CISA KEV, WHO, ClinicalTrials.gov, Congress Bills, the Federal Register, and OFAC Sanctions — then runs the collected data through a stack of algorithms: sheaf cohomology, Kaneko coupled map lattice, Vietoris-Rips persistent homology, Conley index, Dec-POMDP intervention planning, Shapley decomposition, Leontief input-output analysis, mean-field games, Gaussian processes, and Moran evolutionary dynamics. The output is a structured fragility report with domain-level stress scores, risk grades, tipping-point proximity maps, and prioritized intervention recommendations.

What data can you access?

| Data Point | Source | Coverage |
|---|---|---|
| 📊 Economic time series | FRED Economic Data | 800K+ US/global series |
| 📉 Labor market indicators | BLS Economic Data | US employment, CPI, wages |
| 🌐 Global macro indicators | IMF Data | 190 countries |
| 🏦 Development & poverty metrics | World Bank Data | 200+ countries |
| 📋 OECD economic statistics | OECD Statistics | 38 member countries |
| 🌩 Weather & climate events | NOAA Weather | US and global |
| 🚨 Disaster declarations | FEMA Disaster Search | All US declared disasters |
| 🌍 Seismic events | USGS Earthquake Search | Global, real-time |
| ⚠️ Multi-hazard disaster alerts | GDACS | Worldwide, near-real-time |
| 🌫 Air quality readings | OpenAQ | Global monitoring stations |
| 🔓 CVE vulnerability database | NVD CVE Search | Full CVE history |
| 🛡 Actively exploited vulns | CISA KEV Catalog | Confirmed in-the-wild |
| 🏥 Global health indicators | WHO GHO | 1,000+ health series |
| 🧪 Clinical trial activity | ClinicalTrials.gov | Registered trials |
| 🏛 US federal legislation | Congress Bill Tracker | House + Senate bills |
| 📜 Regulatory actions | Federal Register Search | Federal rules & notices |
| 🚫 Sanctions & designations | OFAC Sanctions Search | SDN list + programs |

Why use this MCP server for civilizational fragility assessment?

Manual multi-domain risk assessment means pulling data from a dozen disparate APIs, normalizing incompatible schemas, choosing mathematical frameworks for cross-domain coupling, and spending weeks on analysis that is already stale by the time it is written up. Commercial risk intelligence platforms charge $15,000–$50,000 per year for static reports that cannot respond to live queries.

This MCP server automates the entire analytical pipeline. An AI agent issues a single tool call; the server fetches fresh data across all 17 sources in parallel, builds domain nodes and coupling edges from the raw indicators, and runs 10 algorithms in sequence to produce a structured risk report in minutes. No API keys to manage, no data pipelines to maintain, no model to retrain.

Platform benefits:

  • Standby mode — the server is always warm; tool calls connect immediately with no cold-start latency
  • Parallel data fetching — all 17 actor calls run concurrently; data collection takes 2–4 minutes, not hours
  • API access — trigger tool calls from any MCP-compatible AI client: Claude Desktop, Cursor, Cline, or custom agents
  • Pay-per-call pricing — no subscription; pay only for the tool calls you make
  • Monitoring — configure Slack or email alerts if the server encounters errors via Apify's built-in run monitoring
  • Integrations — connect to Zapier, Make, webhooks, or call the Apify API directly for programmatic access

Features

  • 10 mathematical frameworks in one server — sheaf cohomology, Kaneko CML, Vietoris-Rips persistent homology, Conley index, Dec-POMDP, Shapley decomposition, Leontief I-O, mean-field game, Gaussian process, and Moran process are all implemented in the scoring engine
  • Sheaf H^1 obstruction detection — identifies where local domain assessments fail to glue globally; high H^1 indicates inconsistent risk signals across domains that cannot be reconciled
  • Kaneko coupled map lattice with configurable logistic r parameter (default 3.8, chaotic regime); tracks per-domain Lyapunov exponents, synchronization index, and cascade events where stress crosses critical threshold
  • Vietoris-Rips persistent homology — computes Betti numbers beta_0 (independent risk clusters) and beta_1 (circular dependencies); long-lived intervals in the persistence diagram signal structural vulnerabilities vs. transient noise
  • Conley index Morse decomposition — isolates attractors and repellers in the domain state space; repeller count is a direct fragility signal
  • Dec-POMDP intervention planner — models each domain as an agent selecting from {do_nothing, monitor, mitigate, emergency_response} with belief-space value iteration; outputs optimal policy, total cost/benefit, and value of information per domain
  • Shapley decomposition with interaction indices — computes each domain's marginal contribution to total fragility using the full Shapley formula; pairwise interaction indices reveal synergistic domain pairs
  • Leontief nonlinear input-output analysis — maps resource flow bottlenecks and forward/backward linkages across all 6 domains; system multiplier quantifies amplification
  • Mean-field game (coupled HJB + Fokker-Planck) — models epidemic-economic feedback loops; outputs Nash equilibrium status, epidemic peak, economic trough, and density evolution
  • Gaussian process with Matern 5/2 kernel — spatial regression across domain stress values; hyperparameter optimization via log marginal likelihood; quantifies spatial correlation structure
  • Moran evolutionary process — simulates 10,000 institutional actors over configurable generations; fixation probabilities and stationary distribution reveal long-run institutional dominance
  • 6-domain architecture — economics, climate, health, cybersecurity, governance, and environment modeled as coupled DomainNode objects with stressLevel, fragility, and resilience attributes
  • 17 data sources queried in parallel — FRED, BLS, IMF, World Bank, OECD, NOAA, FEMA, USGS, GDACS, OpenAQ, NVD, CISA KEV, WHO, ClinicalTrials, Congress Bills, Federal Register, OFAC
  • Risk grade output — A through F letter grades derived from the composite overallFragility score for at-a-glance communication
  • Structured recommendations — topRisks and recommendations arrays in every report, ready for agent reasoning chains
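The Shapley decomposition in the list above is easy to sanity-check by hand. Below is an illustrative Python sketch of the exact Shapley formula over all coalitions — the characteristic function v and the per-domain base values are invented stand-ins for this example, not the server's TypeScript scoring engine:

```python
from itertools import combinations
from math import factorial

DOMAINS = ["economic", "climate", "health", "cyber", "governance", "environment"]

def shapley_values(v, players):
    """Exact Shapley values: weighted marginal contributions over all coalitions."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

# Stand-in characteristic function: additive fragility plus a cyber/governance synergy.
base = {"economic": 0.12, "climate": 0.08, "health": 0.09,
        "cyber": 0.2, "governance": 0.11, "environment": 0.07}

def v(coalition):
    total = sum(base[d] for d in coalition)
    if "cyber" in coalition and "governance" in coalition:
        total += 0.05  # synergistic pair
    return total

phi = shapley_values(v, DOMAINS)
# Efficiency property: Shapley values sum to the grand-coalition fragility.
assert abs(sum(phi.values()) - v(frozenset(DOMAINS))) < 1e-9
```

Note how the 0.05 synergy bonus is split equally between cyber and governance — this is exactly what the pairwise interaction indices surface in the real output.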

Use cases for civilizational fragility assessment

National security and strategic intelligence

Defense research teams and think tanks use assess_cascading_fragility to produce quarterly risk briefings that span economic, geopolitical, health, and environmental domains simultaneously. The Dec-POMDP tool output maps directly to resource allocation decisions: where to direct monitoring investment and when to escalate to emergency response posture.

Early warning system development

Teams building automated early-warning systems embed detect_tipping_proximity into daily or weekly scheduled pipelines. Lyapunov exponents above zero flag chaotic domain dynamics; sheaf cohomology obstructions above 0.5 indicate the risk landscape is no longer globally consistent. Both signals typically move before conventional indicators do.
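Those two thresholds can be wired into a scheduled check with a few lines of Python. The field names follow the output schema documented further down (obstruction is derived here as 1 − globalConsistency); the escalation logic itself is a sketch, not part of the server:

```python
def flag_early_warnings(report, obstruction_threshold=0.5):
    """Flag domains whose signals breach the early-warning thresholds."""
    warnings = []
    # Positive Lyapunov exponent => chaotic domain dynamics
    for domain, exponent in report.get("lyapunovExponents", {}).items():
        if exponent > 0:
            warnings.append(f"{domain}: chaotic dynamics (lambda={exponent:.2f})")
    # High sheaf obstruction => risk landscape no longer globally consistent
    consistency = report.get("globalConsistency", 1.0)
    if 1.0 - consistency > obstruction_threshold:
        warnings.append(f"global: sheaf obstruction {1.0 - consistency:.2f} above threshold")
    return warnings

sample = {"lyapunovExponents": {"cyber": 0.31, "health": -0.12}, "globalConsistency": 0.38}
print(flag_early_warnings(sample))
# ['cyber: chaotic dynamics (lambda=0.31)', 'global: sheaf obstruction 0.62 above threshold']
```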

Academic and institutional research

Researchers studying systemic risk, complexity economics, and sociotechnical collapse use simulate_multiplex_cascade and track_persistent_homology to generate quantitative inputs for papers and models. The Vietoris-Rips filtration output and Betti number time series are directly interpretable within the topological data analysis literature.

AI agent augmentation for macro risk reasoning

AI coding assistants and research agents configured with this MCP server can answer questions like "which domain is closest to a tipping point?" or "what is the optimal intervention sequence given current fragility levels?" using live, grounded data rather than training-time knowledge. The structured JSON output is optimized for agent consumption.

Long-term institutional strategy

Strategy teams and scenario planners use forecast_civilizational_trajectory to understand which governance strategies are evolutionarily stable under current conditions. The Moran process fixation probabilities and mean-field game Nash equilibrium outputs quantify which institutional approaches dominate long-run.

Causal pathway and bottleneck analysis

Operations researchers and supply chain analysts use causal_cross_domain_query to trace resource flow bottlenecks between domains via Leontief I-O. The system multiplier reveals how much a unit shock in one domain amplifies across the network. The Gaussian process spatial correlation matrix shows which domains are statistically co-located in the stress landscape.

How to connect this MCP server

Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "civilizational-fragility": {
      "url": "https://civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_APIFY_TOKEN"
    }
  }
}

Cursor

Add this to your .cursor/mcp.json:

{
  "mcpServers": {
    "civilizational-fragility": {
      "url": "https://civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_APIFY_TOKEN"
    }
  }
}

Cline / VS Code

Add to your Cline MCP settings:

{
  "mcpServers": {
    "civilizational-fragility": {
      "url": "https://civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_APIFY_TOKEN"
    }
  }
}

Replace YOUR_APIFY_TOKEN with your token from Apify Console > Settings > Integrations.

MCP tools

| Tool | Best for | Price per event |
|---|---|---|
| assess_cascading_fragility | Full report combining all 10 algorithms | $0.12 |
| detect_tipping_proximity | Early warning; identify domains nearest critical transition | $0.10 |
| simulate_multiplex_cascade | Shock propagation modeling; resilience scenario testing | $0.10 |
| compute_domain_shapley | Blame attribution; which domain drives fragility most | $0.10 |
| plan_intervention_decpomdp | Resource allocation; optimal intervention policy | $0.12 |
| causal_cross_domain_query | Bottleneck tracing; cross-domain feedback quantification | $0.10 |
| track_persistent_homology | Topological risk structure; persistent vs. transient features | $0.08 |
| forecast_civilizational_trajectory | Long-run institutional evolution and trajectory | $0.12 |

Tool parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | "global risk assessment" | Focus query for data collection across all 17 actors |
| cml_time_steps | number | 200 | Coupled map lattice simulation steps (all tools) |
| cml_logistic_r | number | 3.8 | Logistic map r parameter; >3.57 = chaotic regime |
| planning_horizon | number | 5 | Dec-POMDP planning horizon in steps |
| population_size | number | 10000 | Moran process population (trajectory forecast tool only) |
| generations | number | 500 | Moran evolutionary generations (trajectory forecast tool only) |

Input tips

  • Use a specific query for focused data — "climate economic cascading risk 2026" pulls more relevant IMF, World Bank, and NOAA records than the generic default
  • Leave cml_logistic_r at 3.8 for standard analysis — values above 3.9 increase chaos sensitivity; values below 3.57 produce periodic (non-chaotic) CML behavior
  • Start with detect_tipping_proximity before running assess_cascading_fragility — it is cheaper and identifies which domains warrant deeper investigation
  • Increase planning_horizon to 10 for multi-year scenario planning; the default of 5 models near-term intervention sequences
  • Run compute_domain_shapley first when you need to justify intervention spending; the Shapley values identify the highest-impact domain with precision

Output example

A representative response from assess_cascading_fragility:

{
  "overallFragility": 0.67,
  "riskGrade": "D",
  "topRisks": [
    "Elevated sheaf H1 obstructions (0.42) indicate inconsistent cross-domain risk signals — local assessments for cyber and economic domains cannot be reconciled globally",
    "Positive Lyapunov exponents in cyber (0.31) and governance (0.18) signal chaotic dynamics — small shocks can cascade non-linearly",
    "Leontief system multiplier 2.14 indicates each unit of domain stress amplifies 2.14x across the network"
  ],
  "recommendations": [
    "Priority 1: Mitigate cyber domain — Shapley value 0.28 makes it the dominant fragility contributor",
    "Priority 2: Monitor governance domain — positive Lyapunov exponent with second-highest Shapley value (0.21)",
    "Priority 3: Emergency response posture for health — Dec-POMDP optimal action given planning horizon 5"
  ],
  "domains": [
    { "id": "cyber", "name": "Cybersecurity", "stress": 0.74, "fragility": 0.71, "resilience": 0.29 },
    { "id": "economic", "name": "Economics", "stress": 0.58, "fragility": 0.61, "resilience": 0.42 },
    { "id": "governance", "name": "Governance", "stress": 0.63, "fragility": 0.65, "resilience": 0.35 },
    { "id": "health", "name": "Health", "stress": 0.51, "fragility": 0.55, "resilience": 0.48 },
    { "id": "climate", "name": "Climate", "stress": 0.47, "fragility": 0.49, "resilience": 0.54 },
    { "id": "environment", "name": "Environment", "stress": 0.44, "fragility": 0.46, "resilience": 0.58 }
  ],
  "sheafH1": 3,
  "globalConsistency": 0.62,
  "cmlSynchronization": 0.38,
  "cascadeEvents": 7,
  "betti0": 2,
  "betti1": 1,
  "attractors": 3,
  "repellers": 2,
  "shapleyDominant": "cyber",
  "leontiefMultiplier": 2.14,
  "nashEquilibrium": false,
  "moranDominant": "monitor",
  "report": {
    "overallFragility": 0.67,
    "riskGrade": "D",
    "sheafCohomology": { "h0": 1, "h1": 3, "globalConsistency": 0.62 },
    "persistentHomology": { "betti0": 2, "betti1": 1, "totalPersistence": 1.84, "stabilityScore": 0.51 },
    "conleyIndex": { "attractorCount": 3, "repellerCount": 2 },
    "shapley": { "dominantDomain": "cyber", "totalFragility": 0.67 },
    "leontief": { "systemMultiplier": 2.14, "bottlenecks": ["cyber", "governance"] },
    "meanField": { "nashEquilibrium": false, "epidemicPeak": 0.34, "economicTrough": -0.19 }
  }
}
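Assuming the report shape above, a downstream agent or script can triage domains in a few lines. The composite ranking metric below (stress × fragility × (1 − resilience)) is an illustrative choice, not a server output:

```python
report = {
    "riskGrade": "C",
    "domains": [
        {"id": "cyber", "stress": 0.74, "fragility": 0.71, "resilience": 0.29},
        {"id": "economic", "stress": 0.58, "fragility": 0.61, "resilience": 0.42},
        {"id": "climate", "stress": 0.47, "fragility": 0.49, "resilience": 0.54},
    ],
}

def triage_score(d):
    # Fragility-weighted stress, discounted by the domain's capacity to recover
    return d["stress"] * d["fragility"] * (1 - d["resilience"])

ranked = sorted(report["domains"], key=triage_score, reverse=True)
for d in ranked:
    print(d["id"], round(triage_score(d), 3))
```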

Output fields

| Field | Type | Description |
|---|---|---|
| overallFragility | number (0–1) | Composite fragility score across all domains and algorithms |
| riskGrade | string (A–F) | Letter grade derived from overallFragility |
| topRisks[] | string[] | Human-readable top risk narratives for agent reasoning |
| recommendations[] | string[] | Prioritized intervention recommendations |
| domains[].id | string | Domain identifier (cyber, economic, governance, health, climate, environment) |
| domains[].stress | number (0–1) | Current domain stress level derived from live data |
| domains[].fragility | number (0–1) | Structural fragility of the domain |
| domains[].resilience | number (0–1) | Domain capacity to absorb and recover from shocks |
| sheafH1 | number | Count of H^1 obstruction cycles — inconsistent cross-domain signals |
| globalConsistency | number (0–1) | 1 minus normalized H^1; higher = more globally consistent |
| cmlSynchronization | number (0–1) | CML synchronization index; low value = desynchronized cascade risk |
| cascadeEvents | number | Count of cascade events in the CML simulation |
| betti0 | number | Betti-0: independent risk cluster count at final filtration scale |
| betti1 | number | Betti-1: circular dependency count at final filtration scale |
| attractors | number | Conley index attractor count (stable equilibria) |
| repellers | number | Conley index repeller count (unstable equilibria — fragility signal) |
| shapleyDominant | string | Domain with highest Shapley fragility contribution |
| leontiefMultiplier | number | System-wide amplification factor from Leontief I-O |
| nashEquilibrium | boolean | Whether the mean-field game has reached Nash equilibrium |
| moranDominant | string | Dominant institutional strategy from Moran evolutionary process |
| lyapunovExponents | Record<string, number> | Per-domain CML Lyapunov exponents; positive = chaotic |
| tippingProximity | Record<string, number> | Per-domain tipping point proximity score (detect tool) |
| optimalPolicy[] | object[] | Dec-POMDP recommended action per domain with cost/benefit |
| valueOfInformation | Record<string, number> | Per-domain VOI guiding monitoring investment allocation |
| domainContributions | Record<string, number> | Shapley value per domain |
| interactionIndices | Record<string, Record<string, number>> | Pairwise domain synergy matrix |
| fixationProbabilities | Record<string, number> | Moran fixation probability per institutional strategy |
| stationaryDistribution | number[] | Long-run institutional strategy distribution |
| leontiefBottlenecks | string[] | Domains identified as resource flow bottlenecks |
| spatialCorrelation | number[][] | GP Matern 5/2 spatial correlation matrix |
| gpHyperparameters | object | GP length scale, signal variance, noise variance |

How much does it cost to run civilizational fragility assessments?

This MCP server uses pay-per-event pricing — you pay per tool call. Each tool call fetches data from all 17 actors in parallel; the cost reflects the underlying actor compute costs plus the MCP server's own platform costs.

| Scenario | Tool calls | Event fee per call | Total event fees |
|---|---|---|---|
| Quick test (detect_tipping_proximity) | 1 | $0.10 | $0.10 |
| Weekly monitoring run | 4 | ~$0.10 | ~$0.40 |
| Full fragility assessment (assess_cascading_fragility) | 1 | $0.12 | $0.12 |
| Monthly research workflow | 20 | ~$0.10 | ~$2.00 |
| Institutional daily monitoring | 90 | ~$0.10 | ~$9.00 |

Note: the per-event charge above covers the MCP server event fee. The underlying 17 actor calls each consume Apify platform credits separately — budget $5–30 per full assess_cascading_fragility run depending on data volumes returned. You can set a maximum spending limit per run to cap total costs. The actor stops when your budget is reached.

The Apify Free plan includes $5 of monthly platform credits — enough to cover the event fees for dozens of tool calls, though the underlying actor compute for a single full assessment can consume most of it. Compare this to institutional risk platforms at $15,000–$50,000/year for static, non-queryable reports.

Using the API

The MCP server runs in Apify Standby mode and is accessible via its public actor URL. You can also trigger it programmatically using the Apify API.

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Start the MCP server actor in standby mode and query it via MCP
# Or call it as a regular actor run for health checks
run = client.actor("ryanclinton/civilizational-fragility-mcp").call(run_input={})

print(f"Server status: {run['status']}")
print(f"MCP endpoint: https://civilizational-fragility-mcp.apify.actor/mcp")

# For direct MCP tool calls, use an MCP client library pointed at the endpoint:
# url = "https://civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_API_TOKEN"

JavaScript

import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

// Health check / actor start
const run = await client.actor("ryanclinton/civilizational-fragility-mcp").call({});

console.log(`Server status: ${run.status}`);
console.log(`MCP endpoint: https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp`);

// For MCP tool calls, point any MCP client at:
// https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_API_TOKEN

cURL — direct MCP tool call

# Call the assess_cascading_fragility tool directly via HTTP POST to the MCP endpoint
curl -X POST "https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "assess_cascading_fragility",
      "arguments": {
        "query": "climate economic cascading risk 2026",
        "cml_time_steps": 200,
        "cml_logistic_r": 3.8,
        "planning_horizon": 5
      }
    }
  }'

# Fetch available tools list
curl -X POST "https://ryanclinton--civilizational-fragility-mcp.apify.actor/mcp?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

How this MCP server works

Phase 1: Parallel data collection across 17 sources

When any tool is called, fetchAllDomainData() fires 17 runActor() calls concurrently via runActorsParallel(). FRED receives a fixed query for "GDP inflation unemployment debt" with up to 30 results; BLS receives a CPI/unemployment query; all other 15 actors receive the caller-provided query string. Each actor call uses a 180-second timeout and 256 MB memory allocation. Results are returned as raw item arrays.
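The concurrency pattern here is a plain fan-out/gather. A Python sketch of the same idea — the server itself is TypeScript, and fetch_actor below is a stand-in for runActor(), not a real Apify API call:

```python
import asyncio

ACTORS = ["fred", "bls", "imf", "world-bank", "oecd", "noaa", "fema", "usgs",
          "gdacs", "openaq", "nvd", "cisa-kev", "who", "clinicaltrials",
          "congress", "federal-register", "ofac"]

async def fetch_actor(name: str, query: str) -> list:
    """Stand-in for a runActor() call; a real client would hit the Apify API here."""
    await asyncio.sleep(0)  # simulate I/O
    return [{"actor": name, "query": query}]

async def fetch_all(query: str) -> dict:
    # Fan out all 17 calls concurrently, then gather the results in order
    results = await asyncio.gather(*(fetch_actor(a, query) for a in ACTORS))
    return dict(zip(ACTORS, results))

data = asyncio.run(fetch_all("global risk assessment"))
```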

Phase 2: Domain node construction

buildDomainNodes() in scoring.ts maps the 17 raw result arrays to 6 DomainNode objects (economic, climate, health, cyber, governance, environment) plus a set of CouplingEdge objects. Each node receives a stressLevel, fragility, and resilience score derived from the raw indicators using extractNumber() with domain-specific key mappings. Coupling strengths (epsilon values for the CML) are computed from cross-domain data volume ratios. The seeded PRNG (mulberry32) ensures reproducible coupling weights from the same data.
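mulberry32 is a small, well-known 32-bit PRNG. A Python port (the server's version is JavaScript) shows why seeding it from the data makes the coupling weights reproducible:

```python
MASK32 = 0xFFFFFFFF

def mulberry32(seed: int):
    """Port of the mulberry32 PRNG: deterministic floats in [0, 1) from a 32-bit seed."""
    state = seed & MASK32
    def rand() -> float:
        nonlocal state
        state = (state + 0x6D2B79F5) & MASK32
        t = state
        t = ((t ^ (t >> 15)) * (t | 1)) & MASK32
        t = (t ^ (t + (((t ^ (t >> 7)) * (t | 61)) & MASK32))) & MASK32
        return ((t ^ (t >> 14)) & MASK32) / 4294967296
    return rand

rng_a, rng_b = mulberry32(42), mulberry32(42)
seq_a = [rng_a() for _ in range(5)]
seq_b = [rng_b() for _ in range(5)]
assert seq_a == seq_b  # same seed (same data) -> same coupling weights
```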

Phase 3: Algorithm pipeline

Algorithms run sequentially on the node/edge graph:

  1. Sheaf cohomology — builds a simplicial complex from domain nodes and edges, computes coboundary operators, and counts H^0 and H^1 via rank-nullity on the coboundary matrix. The obstruction map records per-edge inconsistency values. globalConsistency = 1 - h1 / totalEdges.
  2. Coupled map lattice — iterates x_i(t+1) = (1-eps)*f(x_i(t)) + (eps/k)*sum_j(f(x_j(t))) for time_steps iterations with f(x) = r*x*(1-x) (logistic map). Lyapunov exponents are estimated from the log-average of absolute Jacobian values. Cascade events are recorded when any node state crosses 0.85.
  3. Persistent homology — constructs a Vietoris-Rips filtration on the 3D feature space (stress, fragility, resilience) using pairwise Euclidean distances as filtration values. Betti numbers are tracked at each filtration threshold. totalPersistence sums interval lifetimes; stabilityScore uses Wasserstein stability.
  4. Conley index — builds isolating neighborhoods around each node's finalState from the CML, computes the connection matrix encoding flow between Morse sets, and classifies each set as attractor (index 0) or repeller (index > 0).
  5. Dec-POMDP — runs belief-space value iteration for planning_horizon steps across 6 domain agents each choosing from 4 actions. Transition costs and benefits are domain-specific. Value of information is computed as the difference in expected policy value between full and partial observability.
  6. Shapley values — computes exact Shapley values via the weighted marginal contribution formula over all coalitions. Pairwise interaction indices quantify synergistic domain pairs where I(i,j) = phi(i,j) - phi(i) - phi(j).
  7. Leontief I-O — builds the technical coefficient matrix A from coupling edge weights, solves x = (I-A)^-1 * d for the output vector, and computes forward/backward linkages for each domain. System multiplier is the mean of the (I-A)^-1 column sums.
  8. Mean-field game — solves the coupled HJB + Fokker-Planck PDE system for epidemic-economic coupling. The HJB value function is iterated backward; the Fokker-Planck density is iterated forward. Nash equilibrium is declared when the value function update is below 1e-4.
  9. Gaussian process — fits a GP with Matern 5/2 kernel (k(r) = (1 + sqrt(5)*r/l + 5r^2/(3l^2)) * exp(-sqrt(5)*r/l)) to the domain stress observations. Hyperparameters (length scale, signal variance, noise variance) are optimized by maximizing log marginal likelihood via gradient descent on the kernel matrix.
  10. Moran process — initializes a population of population_size institutional actors distributed across strategies proportional to domain resilience. At each generation, one actor is selected for reproduction proportional to fitness; one is replaced uniformly. Fixation probabilities use the exact formula rho = (1-1/r) / (1-1/r^N).
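Step 2 above can be sketched directly. An illustrative Python version of the CML update with a Lyapunov estimate and cascade counter — as a simplification, the per-site Lyapunov estimate here uses only the local logistic derivative and ignores coupling terms, and the coupling is uniform all-to-all:

```python
import math

def run_cml(x, eps, r=3.8, steps=200, cascade_threshold=0.85):
    """Kaneko coupled map lattice on a fully connected graph of len(x) domains."""
    n = len(x)
    log_deriv_sums = [0.0] * n
    cascade_events = 0
    f = lambda v: r * v * (1 - v)  # logistic map
    for _ in range(steps):
        fx = [f(v) for v in x]
        # x_i(t+1) = (1-eps)*f(x_i) + (eps/k) * sum over the k = n-1 neighbors
        x = [(1 - eps) * fx[i] + (eps / (n - 1)) * (sum(fx) - fx[i]) for i in range(n)]
        for i, v in enumerate(x):
            # Local Jacobian of the logistic map: f'(v) = r * (1 - 2v)
            log_deriv_sums[i] += math.log(abs(r * (1 - 2 * v)) + 1e-12)
            if v > cascade_threshold:
                cascade_events += 1
    lyapunov = [s / steps for s in log_deriv_sums]
    return x, lyapunov, cascade_events

state, lyap, cascades = run_cml([0.3, 0.5, 0.42, 0.61, 0.55, 0.47], eps=0.1)
```

Because f maps [0, 1] into [0, r/4] and the update is a convex combination, the lattice states stay bounded in [0, 1] for r ≤ 4.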

Phase 4: Report assembly

computeFragilityReport() weights contributions from all 10 algorithms into a single overallFragility score using fixed weights: sheaf H1 (15%), CML synchronization loss (15%), homology total persistence (10%), Conley repeller fraction (10%), Shapley max contribution (15%), Leontief multiplier normalization (10%), mean-field coupling (10%), GP variance (5%), Moran fixation spread (10%). The riskGrade maps 0–0.2 to A, 0.2–0.4 to B, 0.4–0.6 to C, 0.6–0.8 to D, 0.8–1.0 to F.
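The fixed-weight combination and grade mapping above are straightforward to reproduce. An illustrative Python version — the normalization of each algorithm's raw output into [0, 1] is a stand-in (here every signal is pinned to 0.55):

```python
# Weights from the report-assembly description; they sum to 1.0
WEIGHTS = {
    "sheaf_h1": 0.15, "cml_sync_loss": 0.15, "total_persistence": 0.10,
    "repeller_fraction": 0.10, "shapley_max": 0.15, "leontief_norm": 0.10,
    "mean_field": 0.10, "gp_variance": 0.05, "moran_spread": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def risk_grade(score: float) -> str:
    """Map a 0-1 fragility score onto the A-F bands."""
    for bound, grade in [(0.2, "A"), (0.4, "B"), (0.6, "C"), (0.8, "D")]:
        if score < bound:
            return grade
    return "F"

def overall_fragility(signals: dict) -> float:
    """Weighted sum of normalized per-algorithm signals (each assumed in [0, 1])."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

signals = {k: 0.55 for k in WEIGHTS}  # stand-in: every normalized signal at 0.55
score = overall_fragility(signals)
print(round(score, 2), risk_grade(score))  # 0.55 C
```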

Tips for best results

  1. Tune the query for your domain of concern. The query string passes to all 15 variable actors simultaneously. "financial contagion sovereign debt 2026" pulls more relevant IMF, World Bank, and FRED records than the generic default, which directly improves domain node stress calibration.

  2. Use detect_tipping_proximity as a triage tool. It is cheaper than the full assessment and returns the most actionable signal — which specific domain is nearest a critical transition. Run it first, then use simulate_multiplex_cascade or plan_intervention_decpomdp on the flagged domain.

  3. Interpret Lyapunov exponents carefully. A positive exponent indicates chaotic dynamics where small perturbations grow exponentially. This does not mean collapse is imminent — it means the system is sensitive to interventions, both stabilizing and destabilizing.

  4. Cross-reference Shapley values with Dec-POMDP policy. The Shapley dominant domain is not always the highest-priority intervention target. Dec-POMDP accounts for intervention cost and diminishing returns. If the Shapley dominant domain has low value of information, monitoring investment may be better placed elsewhere.

  5. Set cml_time_steps to 500 for thorough cascade analysis. The default 200 steps is sufficient for Lyapunov estimation but longer runs reveal transient cascade chains that short runs miss, especially near the r=3.57 bifurcation boundary.

  6. Combine track_persistent_homology with simulate_multiplex_cascade. Persistent features (death − birth > 0.2) in the homology output identify structural vulnerabilities. Feed the domain IDs from those features as focal points in the cascade simulation query to stress-test the topology.

  7. Use forecast_civilizational_trajectory for scenario planning. Run it with a population_size of 1000 for a fast directional read, then 10000 for publication-grade fixation probability estimates.

  8. Monitor value of information (VOI) across planning cycles. The Dec-POMDP VOI output changes as domain stress levels shift. Domains with high VOI should receive increased monitoring investment; low VOI domains are well-understood and do not warrant expensive data collection.
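Tip 6's persistence filter (death − birth > 0.2) is a one-liner once you have the intervals. A sketch assuming intervals come back as (domain, birth, death) tuples — the actual field layout in the tool output may differ:

```python
def persistent_features(intervals, min_lifetime=0.2):
    """Keep intervals whose lifetime (death - birth) exceeds the noise threshold."""
    return [(domain, round(death - birth, 2))
            for domain, birth, death in intervals
            if death - birth > min_lifetime]

intervals = [("cyber", 0.05, 0.48), ("economic", 0.10, 0.22), ("governance", 0.08, 0.41)]
print(persistent_features(intervals))
# [('cyber', 0.43), ('governance', 0.33)]
```

The surviving domain IDs are the candidates to feed into simulate_multiplex_cascade as focal points.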

Combine with other Apify actors

| Actor | How to combine |
|---|---|
| Company Deep Research | Enrich fragility output with company-level exposure reports for the dominant risk domain identified by Shapley values |
| WHOIS Domain Lookup | Cross-reference cyber domain stress signals with infrastructure ownership data for attribution analysis |
| SEC EDGAR Filing Analyzer | Map economic domain stress signals to specific public company risk disclosures for portfolio-level impact assessment |
| Trustpilot Review Analyzer | Layer sentiment signals from consumer-facing businesses onto economic domain stress for ground-truth calibration |
| Website Change Monitor | Track changes to CISA, FEMA, and WHO pages as leading indicators that update before the underlying data sources |
| B2B Lead Qualifier | Identify companies operating in high-stress domains flagged by the fragility assessment for targeted outreach |
| Competitor Analysis Report | Combine governance domain stress with competitive intelligence to understand regulatory disruption risk by sector |

Limitations

  • Data latency varies by source — FRED, BLS, and IMF data may lag real-world conditions by 1–4 weeks depending on publication schedules. The server queries current data but cannot backfill unreleased series.
  • Domain node construction is indicator-based — stress scores are computed from available API fields (unemployment rates, CVE counts, disaster declarations) and do not incorporate classified intelligence, proprietary data, or qualitative expert judgment.
  • Dec-POMDP is computationally approximate — exact Dec-POMDP is NEXP-complete. The implementation uses finite-horizon value iteration with discretized belief states, which is a tractable approximation that may miss optimal policies in high-uncertainty regimes.
  • Sheaf cohomology obstructions indicate inconsistency, not collapse — high H^1 means local domain risk assessments cannot be reconciled globally. This is a structural signal, not a deterministic prediction of failure. Many high-fragility periods do not produce collapse events.
  • CML dynamics are sensitive to logistic r near the bifurcation boundary — small changes in r near 3.57 produce qualitatively different behavior. Results should be tested across a range of r values before being used for policy decisions.
  • Moran process assumes fitness proportional selection — real institutional change is path-dependent, politically constrained, and influenced by coordination mechanisms not captured by the frequency-dependent selection model.
  • The server queries up to 20–30 results per actor — for global macro questions, this sampling may undersample some data domains. Increase maxResults via the query string for broader coverage.
  • Not a real-time monitoring service — each tool call fetches fresh data at call time. The server does not maintain continuous monitoring streams. For continuous monitoring, schedule regular tool calls via Apify's scheduling system.

Integrations

  • Zapier — trigger a civilizational fragility assessment on a schedule and push risk grades to Slack, email, or any Zapier-connected app
  • Make — build automated scenario monitoring pipelines that call detect_tipping_proximity daily and escalate when any domain tipping score exceeds a threshold
  • Google Sheets — append fragility scores and domain stress levels to a time-series sheet for trend tracking and charting
  • Apify API — call the MCP endpoint programmatically from any language for integration into research pipelines, dashboards, or agent frameworks
  • Webhooks — configure webhooks to notify downstream systems when a run completes or when the server encounters errors
  • LangChain / LlamaIndex — connect this MCP server to LangChain agents or LlamaIndex pipelines for grounded civilizational risk reasoning with live data
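
A Make or Zapier scenario like the one described above needs only a small escalation check on the tool output. The sketch below assumes a simplified result shape (domain, tippingScore fields); the actual output schema may differ.

```typescript
// Assumed simplification of a detect_tipping_proximity result item.
interface TippingResult {
  domain: string;
  tippingScore: number;
}

// Return the domains whose tipping score meets or exceeds the threshold,
// i.e. the ones an automation pipeline should escalate.
function domainsToEscalate(results: TippingResult[], threshold = 0.7): string[] {
  return results
    .filter(r => r.tippingScore >= threshold)
    .map(r => r.domain);
}
```

The threshold of 0.7 is an arbitrary example; pick one appropriate to your alerting tolerance.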

Troubleshooting

  • Tool call returns empty or near-zero domain values — one or more upstream data actors may have returned no results for the query. Check that the query string matches the domain of interest and that the Apify platform is not experiencing an outage. Try a broader query like "global risk" to confirm data is flowing.

  • Run timeout before all 17 actors complete — the default actor timeout is 3 minutes per actor call. For high-latency periods on shared infrastructure, some actor calls may return empty arrays rather than timing out the whole run. The server handles this gracefully by building domain nodes from partial data. Retry the call if critical domains show zero stress scores.
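
The retry advice above can be automated client-side. This sketch assumes a simplified domain-node shape (domain, stressScore); it only decides whether a retry is warranted, leaving the actual re-call to your client code.

```typescript
// Assumed simplification of a domain node in the tool output.
interface DomainNode {
  domain: string;
  stressScore: number;
}

// Retry when any critical domain is present but came back with zero stress,
// which usually means its upstream actors returned no data this run.
function needsRetry(nodes: DomainNode[], criticalDomains: string[]): boolean {
  return criticalDomains.some(d =>
    nodes.some(n => n.domain === d && n.stressScore === 0));
}
```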

  • CML output shows all domains synchronized (synchronizationIndex near 1.0) — this can occur when domain stress levels are all very low (below 0.2) or when data collection returns minimal indicator variance. It does not indicate a healthy system — it may indicate insufficient data. Use a more specific query to improve data density.

  • Dec-POMDP optimal policy is all "do_nothing" — this occurs when all domain fragility scores are below 0.3 and no domain has high enough stress to trigger monitoring or mitigation thresholds. The output is technically correct but not actionable. Run with a query that produces higher-stress data or increase the planning horizon.

  • Spending limit error in tool response — you have reached the maximum spend configured for the actor run. Increase the limit in Apify Console under the actor's run settings, or set a higher maxTotalChargeUsd in your run configuration.

Responsible use

  • This server queries only publicly available data from official government and intergovernmental sources (FRED, BLS, IMF, World Bank, NOAA, FEMA, USGS, WHO, CISA, NVD, ClinicalTrials.gov, Congress.gov, Federal Register, OFAC).
  • Risk assessments produced by this server are model outputs derived from quantitative indicators. They should not be used as the sole basis for policy decisions, investment actions, or national security assessments.
  • Mathematical fragility scores reflect statistical patterns in public data and do not constitute predictions of specific events or collapse scenarios.
  • Do not use outputs to support disinformation, market manipulation, or any activity that could cause harm through misrepresentation of risk.
  • For guidance on responsible use of automated risk assessment tools, consult your organization's data governance and AI ethics policies.

FAQ

How many data sources does the civilizational fragility MCP server query per tool call? All 8 tools query all 17 data sources in parallel on every call — FRED, BLS, IMF, World Bank, OECD, NOAA, FEMA, USGS, GDACS, OpenAQ, NVD, CISA KEV, WHO GHO, ClinicalTrials, Congress Bills, Federal Register, and OFAC. There is no partial-source mode, as the cross-domain coupling algorithms require all 6 domain nodes to be populated.

How long does a civilizational fragility assessment take to complete? Data collection across 17 parallel actor calls takes approximately 2–4 minutes depending on query specificity and platform load. Algorithm computation (10 frameworks) adds under 5 seconds. Total wall-clock time from tool call to structured result is typically 3–6 minutes.

What does the risk grade mean and how is it calculated? The risk grade (A through F) maps directly to the overallFragility score: A = 0–0.2, B = 0.2–0.4, C = 0.4–0.6, D = 0.6–0.8, F = 0.8–1.0. The score is a weighted combination of algorithm outputs: sheaf H^1 obstruction (15%), CML desynchronization (15%), Shapley maximum contribution (15%), persistent homology total persistence (10%), Conley repeller fraction (10%), Leontief multiplier normalization (10%), mean-field coupling strength (10%), Moran fixation spread (10%), and GP variance (5%).
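
A minimal sketch of the grade mapping and weighting described above. The weight keys are illustrative names, not the server's actual field names, and the handling of exact cut points (for example whether 0.4 grades as B or C) is an assumption.

```typescript
// Weights from the FAQ answer; key names are assumed for illustration.
const WEIGHTS: Record<string, number> = {
  sheafH1: 0.15, cmlDesync: 0.15, shapleyMax: 0.15,
  totalPersistence: 0.10, conleyRepeller: 0.10, leontiefMultiplier: 0.10,
  meanFieldCoupling: 0.10, moranFixation: 0.10, gpVariance: 0.05,
};

// Weighted combination of the per-algorithm scores (each assumed in [0, 1]).
function overallFragility(scores: Record<string, number>): number {
  return Object.entries(WEIGHTS).reduce((s, [k, w]) => s + w * (scores[k] ?? 0), 0);
}

// Map the combined score to the A–F grade bands.
function riskGrade(overallFragility: number): string {
  if (overallFragility < 0.2) return "A";
  if (overallFragility < 0.4) return "B";
  if (overallFragility < 0.6) return "C";
  if (overallFragility < 0.8) return "D";
  return "F";
}
```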

How is this different from existing risk rating services like Verisk, Moody's, or RMS? Traditional risk services produce static, proprietary reports on fixed publication schedules. This server produces live, queryable, transparent algorithmic assessments on demand. Every scoring formula is implemented in open TypeScript code. The data sources are all public. The mathematical frameworks (sheaf cohomology, CML, persistent homology) operate on cross-domain coupling structure rather than single-domain metrics. No equivalent open, queryable, multi-domain fragility tool exists at this price point.

Is it legal to use this data for risk analysis and research? Yes. All 17 data sources are operated by US government agencies (FRED/Fed, BLS, NOAA, FEMA, USGS, NVD/NIST, CISA, ClinicalTrials.gov, Congress.gov, Federal Register, OFAC) or intergovernmental organizations (IMF, World Bank, OECD, GDACS, WHO) that publish data specifically for public research and analysis use. OpenAQ is an open air quality data platform. Use of the data is subject to each source's terms of service, all of which permit non-commercial and commercial research use.

What does a positive Lyapunov exponent mean in practice? A positive Lyapunov exponent from the coupled map lattice simulation means that domain is in a chaotic dynamical regime where nearby initial conditions diverge exponentially. In practical terms: small policy changes or external shocks will have unpredictable, outsized effects. This is a signal that interventions should be cautious and reversible, and that monitoring frequency should increase.
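
The Lyapunov exponent for a single logistic map can be estimated directly as the long-run average of ln|f'(x)| along the orbit. This sketch shows the underlying idea only; it is not the server's coupled-lattice implementation.

```typescript
// Estimate the Lyapunov exponent of x' = r*x*(1-x) by averaging
// ln|f'(x)| = ln|r*(1 - 2x)| along the orbit after a burn-in.
function lyapunovLogistic(r: number, x0 = 0.4, burnIn = 500, n = 20000): number {
  let x = x0;
  for (let i = 0; i < burnIn; i++) x = r * x * (1 - x);
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += Math.log(Math.abs(r * (1 - 2 * x)));
    x = r * x * (1 - x);
  }
  // Positive => chaotic (exponential divergence); negative => stable/periodic.
  return sum / n;
}
```

At r = 4 the estimate converges toward ln 2 ≈ 0.693 (the known value for the fully chaotic logistic map), while at r = 2.5 the orbit settles on a stable fixed point and the exponent is negative.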

Can I use this MCP server with any AI agent framework? Yes. The server implements the Model Context Protocol (MCP) over HTTP using the @modelcontextprotocol/sdk StreamableHTTP transport. Any MCP-compatible client works: Claude Desktop, Cursor, Cline, Windsurf, VS Code with MCP extensions, custom agents using the MCP Python or JavaScript SDK, or any HTTP client that constructs valid JSON-RPC 2.0 requests.
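
For clients that construct requests by hand, an MCP tool invocation is a JSON-RPC 2.0 request using the "tools/call" method from the MCP specification. The tool name and arguments below are examples only; the underscore form follows the tool names used elsewhere in this document.

```typescript
// Build a JSON-RPC 2.0 body for an MCP tools/call request.
function buildToolCall(id: number, tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// POST this JSON to the MCP endpoint with Content-Type: application/json.
const body = buildToolCall(1, "assess_cascading_fragility", { query: "global risk" });
```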

How accurate are the domain stress scores? Domain stress scores are derived from quantitative public indicators: unemployment rates, GDP growth, CVE counts, disaster declarations, disease incidence rates, air quality indices, and legislative activity. They capture what is measurable in public data. They do not incorporate intelligence assessments, classified data, or expert qualitative judgment. Treat them as quantitative baselines to be supplemented with domain expertise, not as standalone ground truth.

Can I schedule this MCP server to run assessments automatically? Yes. Use Apify's built-in scheduling system to trigger the actor on any cron schedule — daily, weekly, or custom. You can also configure webhooks to push results to your systems automatically after each run, or use the Zapier or Make integrations to route outputs to Slack, email, or a database.

What happens if one of the 17 upstream actors fails or returns no data? The runActor() function in actor-client.ts catches all errors and returns an empty array rather than throwing. buildDomainNodes() handles missing data by constructing domain nodes with lower data-point counts and adjusted confidence weights. The assessment completes with partial data; domains whose source actors returned empty arrays will have lower stress scores and higher uncertainty. The dataPoints field on each domain node indicates how much data fed into that domain's calculation.
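
A hypothetical illustration of the partial-data behavior described above — this is NOT the actual buildDomainNodes() logic, just a sketch of how a lower dataPoints count can translate into a lower confidence weight and a discounted stress score.

```typescript
// Assumed simplification of a domain's raw inputs.
interface DomainInput {
  domain: string;
  dataPoints: number;  // how many data points fed this domain
  rawStress: number;   // unweighted stress estimate in [0, 1]
}

// Confidence rises toward 1 as more data points arrive; 0 when empty.
// The saturation constant of 20 is an arbitrary illustrative choice.
function confidenceWeight(dataPoints: number, saturation = 20): number {
  return dataPoints / (dataPoints + saturation);
}

// Discount the raw stress by the data-driven confidence.
function weightedStress(d: DomainInput): number {
  return d.rawStress * confidenceWeight(d.dataPoints);
}
```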

How does the Shapley dominant domain result guide intervention priority? The Shapley dominant domain has the highest marginal contribution to total system fragility. Removing or stabilizing that domain reduces overall fragility more than any other single intervention. However, the Dec-POMDP plan_intervention_decpomdp tool factors in intervention cost and value of information — sometimes a domain with a lower Shapley value is the higher-priority target because it is cheaper to stabilize or because better monitoring data would significantly change the policy.
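
The Shapley decomposition behind the dominant-domain result can be computed exactly for a small domain set by enumerating all coalitions. The sketch below uses a toy fragility function; the server's actual cross-domain model is more involved.

```typescript
// Exact Shapley values: phi_i = sum over S not containing i of
// |S|!(n-|S|-1)!/n! * [v(S ∪ {i}) - v(S)].
function shapley(
  domains: string[],
  fragility: (coalition: string[]) => number,
): Map<string, number> {
  const n = domains.length;
  const values = new Map(domains.map(d => [d, 0]));
  // Enumerate all subsets via bitmask; fine for the server's 6 domains.
  for (let mask = 0; mask < 1 << n; mask++) {
    const coalition = domains.filter((_, i) => mask & (1 << i));
    const base = fragility(coalition);
    const w = factorial(coalition.length) * factorial(n - coalition.length - 1) / factorial(n);
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) continue;
      const withI = fragility([...coalition, domains[i]]);
      values.set(domains[i], values.get(domains[i])! + w * (withI - base));
    }
  }
  return values;
}

function factorial(k: number): number {
  let f = 1;
  for (let i = 2; i <= k; i++) f *= i;
  return f;
}
```

For a purely additive fragility function the Shapley value of each domain equals its standalone contribution; coupling between domains is what makes the decomposition informative in practice.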

Help us improve

If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:

  1. Go to Account Settings > Privacy
  2. Enable Share runs with public Actor creators

This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.

Support

Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom configurations, additional data source integrations, or enterprise deployments, reach out through the Apify platform.

How it works

01

Configure

Set your parameters in the Apify Console or pass them via API.

02

Run

Click Start, or trigger via API, webhook, or a schedule.

03

Get results

Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.

Use cases

Researchers

Run reproducible, quantitative cross-domain fragility assessments backed by public data.

Security Analysts

Monitor tipping-point proximity and cross-domain cascade risk on a schedule.

Risk Teams

Automate fragility scoring pipelines and track domain stress trends over time.

Developers

Integrate via REST API or use as an MCP tool in AI workflows.

Ready to try Civilizational Fragility MCP Server?

Start for free on Apify. No credit card required.

Open on Apify Store