Climate-Economic Nexus MCP
**Climate-economic integrated assessment** via the Model Context Protocol gives AI agents quantitative tools for scenario analysis, tipping cascade detection, carbon market forecasting, and physical risk attribution. This MCP server orchestrates 18 live environmental and economic data sources — NOAA, GDACS, World Bank, IMF, FRED, GBIF, IUCN, Eurostat, and more — then applies 8 published mathematical frameworks to produce structured, decision-grade outputs.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| simulate-integrated-assessment | DICE/RICE optimal control simulation | $0.10 |
| detect-tipping-cascades | Thom cusp catastrophe bifurcation detection | $0.10 |
| quantify-damage-uncertainty | Bayesian hierarchical damage function | $0.08 |
| optimize-robust-adaptation | PRIM bump-hunting with MOEA optimization | $0.08 |
| downscale-spatial-impacts | Gaussian process Matern kernel spatial regression | $0.06 |
| forecast-carbon-price-regimes | Regime-switching jump-diffusion Hamilton filter | $0.08 |
| assess-biodiversity-economic-loss | Species-area percolation on habitat lattice | $0.06 |
| attribute-climate-damages | Optimal fingerprinting total least squares | $0.08 |
Example: 100 events = $10.00 · 1,000 events = $100.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
Endpoint: https://ryanclinton--climate-economic-nexus-mcp.apify.actor/mcp

```json
{
  "mcpServers": {
    "climate-economic-nexus-mcp": {
      "url": "https://ryanclinton--climate-economic-nexus-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation
The server runs persistently on the Apify platform in Standby mode. Any MCP-compatible client — Claude Desktop, Cursor, Windsurf, Cline — connects via a single endpoint URL with no infrastructure to manage. All 8 tools follow pay-per-event pricing: you pay only for the analysis you actually run.
⬇️ What data can you access?
| Data Point | Source | Coverage |
|---|---|---|
| 🌡️ Weather patterns and climate alerts | NOAA Weather Search | US and global |
| 🌊 Sea level and seismic activity | USGS Earthquake Search | Global |
| 🌪️ Real-time disaster events (floods, cyclones, wildfires) | GDACS Disaster Alerts | Worldwide |
| 💨 Air quality measurements | OpenAQ Air Quality | Global monitoring stations |
| 🌧️ Flood risk data | UK Flood Warnings | UK river and coastal systems |
| 🦋 Species occurrence records (2B+ records) | GBIF Biodiversity Search | Global |
| 🔴 Threatened species assessments | IUCN Red List Search | Global species |
| 📊 Country development indicators | World Bank Indicators | 200+ countries |
| 💹 Global macroeconomic projections | IMF Economic Data | Global economies |
| 📈 US economic time series | FRED Economic Data | GDP, CPI, energy |
| 🏛️ OECD member statistics | OECD Statistics Search | OECD nations |
| 🇪🇺 EU environmental and economic stats | Eurostat Data Search | EU member states |
| 💱 Live exchange rates (150+ currencies) | Exchange Rate Tracker | Global |
| 📉 Historical exchange rate series | Exchange Rate History | Time series |
| 📍 Geographic coordinate resolution | Nominatim Geocoder | Global |
| 🏚️ US disaster declarations | FEMA Disaster Search | All US events |
| 🌍 Climate adaptation projects | World Bank Projects Search | Global |
| 🌤️ Multi-day weather forecasts | Weather Forecast Search | Global locations |
Why use Climate-Economic Nexus MCP?
Building your own climate-economic analytics pipeline means licensing data from NOAA, World Bank, IUCN, and a dozen other sources, implementing published damage functions, running Monte Carlo simulations, and maintaining all of it. That is months of engineering work — and it still doesn't give you integrated cross-source analysis.
This MCP server handles the entire data collection and computation layer. Your AI agent calls a single tool with a natural-language query, and the server fans out to all 18 data sources in parallel, calibrates the mathematical models from live data, and returns structured JSON results ready for decision support.
Platform capabilities your agent inherits automatically:
- Standby mode — the server stays warm between calls, so there is no cold-start latency on inference requests
- Parallel execution — all 18 actor calls run concurrently via `Promise.all`, capping per-tool latency at that of the slowest single source
- Spending limits — set a maximum budget per session; all tools check the limit before charging
- API access — connect from any MCP client or trigger analysis directly from Python, JavaScript, or cURL
- Monitoring — Apify platform logging captures every data source call, result count, and error for debugging
- Integrations — output can be piped to Zapier, Make, Google Sheets, webhooks, or HubSpot via Apify integrations
⬆️ MCP Tools
| Tool | Price | Description |
|---|---|---|
| `simulate_integrated_assessment` | $0.035 | DICE optimal control: three-reservoir carbon cycle, two-box energy balance, discounted utility maximization. Returns SCC, peak warming, year of 2°C, carbon budget, optimal carbon tax path. Queries 18 actors. |
| `detect_tipping_cascades` | $0.030 | Cusp catastrophe bifurcation analysis of AMOC, Amazon, ice sheets, coral reefs, and permafrost as coupled SDEs with Heaviside coupling. Returns cascade probability, tipped count, system risk, element-level bifurcation states. |
| `quantify_damage_uncertainty` | $0.030 | Bayesian hierarchical damage model D(T) = a1·T + a2·T² with Gibbs-like posterior updates. Returns regional damage estimates with 95th/99th percentile tail risk and a model disagreement score. |
| `optimize_robust_adaptation` | $0.040 | PRIM bump-hunting in scenario space plus MOEA Pareto optimization. Returns adaptation strategies on the cost-robustness-regret Pareto front, vulnerable scenario boxes, strategy rankings. |
| `downscale_spatial_impacts` | $0.035 | Gaussian process spatial downscaling with a Matern-3/2 kernel. Maps temperature anomalies, precipitation, sea level, and damage intensity at configurable grid resolution. |
| `forecast_carbon_price_regimes` | $0.030 | Regime-switching jump-diffusion with a Hamilton filter. Identifies low-stable, policy-transition, and high-volatile carbon price regimes. Returns transition matrix, 90% forecast CI, jump probability. |
| `assess_biodiversity_economic_loss` | $0.030 | Species-area relationship S = cA^z with percolation theory on a habitat lattice. Quantifies ecosystem service economic losses per region and identifies habitat connectivity collapse thresholds. |
| `attribute_climate_damages` | $0.030 | Optimal fingerprinting via total least squares. Decomposes observed damages into anthropogenic and natural variability components. Returns attribution fraction, detection statistic, signal betas. |
Use cases for climate-economic integrated assessment
Climate risk financial disclosure
Chief Risk Officers and sustainability teams building TCFD-aligned disclosures need quantified scenario analysis under RCP 2.6, 4.5, and 8.5 pathways. Use simulate_integrated_assessment for physical risk trajectories and downscale_spatial_impacts to map temperature and damage exposure at asset-level granularity. quantify_damage_uncertainty provides the confidence intervals required for robust disclosure narratives.
Carbon market strategy and ETS trading
Carbon trading desks and corporate sustainability teams managing emissions obligations need to anticipate price regime transitions before they happen. forecast_carbon_price_regimes applies regime-switching jump-diffusion with Hamilton filter smoothing to identify when the EU ETS or other markets are approaching a policy-transition regime, including the 90% confidence interval for price in each regime.
Insurance catastrophe modeling and loss-and-damage assessment
Actuaries pricing climate-related property insurance and reinsurance need to separate the anthropogenic climate signal from natural variability in historical loss data. attribute_climate_damages uses optimal fingerprinting with Total Least Squares — the same framework used in Allen and Stott (2003) — to compute what fraction of observed damages is attributable to anthropogenic forcing, supporting defensible actuarial assumptions and climate litigation analysis.
Adaptation investment planning under deep uncertainty
Infrastructure planners, government agencies, and development finance institutions need adaptation portfolios that perform well across a wide range of future scenarios, not just the central projection. optimize_robust_adaptation applies Patient Rule Induction Method (PRIM) bump-hunting to discover the specific scenario combinations where each strategy fails, and MOEA multi-objective optimization to produce a Pareto front balancing cost, damage reduction, and robustness.
Tipping point early warning and systemic risk monitoring
Risk analysts tracking interconnected earth system elements need quantitative proximity-to-bifurcation metrics, not qualitative descriptions. detect_tipping_cascades parameterizes AMOC, Amazon dieback, West Antarctic Ice Sheet, coral reefs, and permafrost thaw as coupled cusp catastrophe systems and runs 5,000 Euler-Maruyama Monte Carlo paths to estimate cascade probability and which tipping chains are most likely.
Biodiversity and natural capital accounting
Investment managers, regulatory bodies, and corporates reporting under TNFD frameworks need to quantify the economic value of ecosystem services at risk from habitat loss. assess_biodiversity_economic_loss combines GBIF occurrence data, IUCN threat assessments, and World Bank country indicators to compute species-area loss estimates and dollar-equivalent ecosystem service losses per biome, using percolation theory to identify connectivity collapse thresholds.
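The species-area arithmetic this tool relies on reduces to a short calculation. In the sketch below, the exponent z and the habitat areas are illustrative placeholders, not the server's calibrated values:

```python
# Sketch of the species-area loss fraction, assuming the published power-law
# form S = c * A^z. The constant c cancels out of the ratio, so only the
# exponent z matters; z = 0.25 is a commonly cited illustrative value.
def species_loss_fraction(area_original_km2: float, area_remaining_km2: float,
                          z: float = 0.25) -> float:
    """Fraction of species lost when habitat shrinks from A to A'.

    S'/S = (A'/A)^z, so the loss fraction is 1 - (A'/A)^z.
    """
    return 1.0 - (area_remaining_km2 / area_original_km2) ** z

# Halving the habitat area implies roughly a 16% species loss at z = 0.25
print(round(species_loss_fraction(100_000, 50_000), 3))
```

The sublinear exponent is why partial habitat loss causes disproportionately small initial species loss, while percolation effects (connectivity collapse) add the sharp thresholds the tool reports separately.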
How to connect this MCP server
This server runs in Apify Standby mode and exposes a persistent /mcp endpoint. No deployment or configuration is needed beyond adding it to your MCP client.
Claude Desktop
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "climate-economic-nexus": {
      "url": "https://climate-economic-nexus-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```
Cursor / Windsurf / Cline
Add the endpoint https://climate-economic-nexus-mcp.apify.actor/mcp in your MCP server settings with your Apify API token as the Bearer authorization header. The server is immediately available — no spin-up wait.
Direct HTTP (cURL)
```bash
# Call simulate_integrated_assessment directly
curl -X POST "https://climate-economic-nexus-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "simulate_integrated_assessment",
      "arguments": {
        "query": "global warming 2050 economic impact",
        "horizon_years": 80,
        "carbon_tax_init": 75
      }
    },
    "id": 1
  }'
```
Python
```python
import httpx
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"
MCP_URL = "https://climate-economic-nexus-mcp.apify.actor/mcp"

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
        "name": "simulate_integrated_assessment",
        "arguments": {
            "query": "Southeast Asia flood risk economic damage 2050",
            "horizon_years": 100,
            "carbon_tax_init": 50
        }
    },
    "id": 1
}

response = httpx.post(
    MCP_URL,
    json=payload,
    headers={"Authorization": f"Bearer {APIFY_TOKEN}"}
)
result = response.json()
content = json.loads(result["result"]["content"][0]["text"])
print(f"Social Cost of Carbon: ${content['socialCostOfCarbon']:.2f}/tCO2")
print(f"Peak Warming: {content['peakWarming']:.2f}°C")
print(f"Year of 2°C: {content['yearOf2C']}")
print(f"Carbon Budget Remaining: {content['carbonBudgetRemaining']:.0f} GtCO2")
```
JavaScript
```javascript
const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const MCP_URL = "https://climate-economic-nexus-mcp.apify.actor/mcp";

const response = await fetch(MCP_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${APIFY_TOKEN}`
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "tools/call",
    params: {
      name: "detect_tipping_cascades",
      arguments: {
        query: "AMOC collapse ice sheet tipping point",
        n_paths: 5000,
        horizon_years: 100
      }
    },
    id: 1
  })
});

const result = await response.json();
const data = JSON.parse(result.result.content[0].text);
console.log(`System Risk: ${(data.systemRisk * 100).toFixed(1)}%`);
console.log(`Cascade Probability: ${(data.cascadeProbability * 100).toFixed(1)}%`);
console.log(`Elements Near Bifurcation: ${data.tippedCount}`);
for (const element of data.elements) {
  console.log(`  ${element.name}: ${element.bifurcationType} (discriminant: ${element.discriminant.toFixed(3)})`);
}
```
Output examples
simulate_integrated_assessment
```json
{
  "socialCostOfCarbon": 187.42,
  "peakWarming": 2.81,
  "yearOf2C": 2061,
  "carbonBudgetRemaining": 312.7,
  "discountedUtility": 4821.6,
  "optimalCarbonTaxPath": [
    50.0, 51.5, 53.1, 54.7, 56.3, 58.0, 59.7, 61.5, 63.4, 65.3
  ],
  "trajectorySnapshots": [
    {
      "year": 2025, "temperature": 1.10, "temperatureOcean": 0.33,
      "carbonAtm": 851.0, "carbonUpper": 460.0, "carbonDeep": 1740.0,
      "output": 1.08e14, "damages": 0.00287, "abatement": 0.00914,
      "netOutput": 1.06e14, "carbonTax": 50.0, "emissions": 36.2
    },
    {
      "year": 2035, "temperature": 1.42, "temperatureOcean": 0.46,
      "carbonAtm": 906.3, "carbonUpper": 471.2, "carbonDeep": 1742.1,
      "output": 1.23e14, "damages": 0.00478, "abatement": 0.01123,
      "netOutput": 1.20e14, "carbonTax": 67.2, "emissions": 31.8
    }
  ],
  "totalTimesteps": 100
}
```
detect_tipping_cascades
```json
{
  "systemRisk": 0.347,
  "tippedCount": 2,
  "cascadeProbability": 0.183,
  "elements": [
    {
      "name": "AMOC",
      "state": -0.412,
      "controlA": -1.823,
      "controlB": 0.091,
      "discriminant": -21.6,
      "nearBifurcation": true,
      "bifurcationType": "fold",
      "criticalDistance": 0.048
    },
    {
      "name": "Amazon",
      "state": 0.218,
      "controlA": -0.724,
      "controlB": 0.034,
      "discriminant": -0.34,
      "nearBifurcation": false,
      "bifurcationType": "stable",
      "criticalDistance": 0.312
    }
  ],
  "cascadeChains": [
    { "trigger": "AMOC", "affected": ["Amazon", "PermafrostArctic"], "severity": 0.71 }
  ],
  "pathCount": 5000
}
```
forecast_carbon_price_regimes
```json
{
  "currentRegime": 1,
  "regimeNames": ["low-stable", "policy-transition", "high-volatile"],
  "regimeProbabilities": [0.18, 0.61, 0.21],
  "forecastMean": 84.3,
  "forecastStd": 22.7,
  "forecast90CI": [48.2, 124.6],
  "jumpProbability": 0.094,
  "transitionMatrix": [
    [0.92, 0.07, 0.01],
    [0.05, 0.88, 0.07],
    [0.03, 0.12, 0.85]
  ],
  "filteredPrices": [52.1, 54.3, 58.8, 61.2, 67.4, 71.9, 75.0, 78.3, 81.7, 84.3]
}
```
Output fields
simulate_integrated_assessment
| Field | Type | Description |
|---|---|---|
| `socialCostOfCarbon` | number | Marginal damage of one additional tonne of CO2 in $/tCO2 |
| `peakWarming` | number | Maximum projected temperature anomaly over the horizon in °C |
| `yearOf2C` | number \| null | Projected year when the 2°C warming threshold is crossed, or null if not crossed |
| `carbonBudgetRemaining` | number | Remaining carbon budget to the 2°C threshold in GtCO2 |
| `discountedUtility` | number | Total discounted utility across the simulation horizon |
| `optimalCarbonTaxPath` | number[] | First 10 years of the optimal carbon tax trajectory in $/tCO2/year |
| `trajectorySnapshots` | DICEState[] | Annual state snapshots at every 10th timestep (up to 15) |
| `trajectorySnapshots[].year` | number | Calendar year |
| `trajectorySnapshots[].temperature` | number | Atmospheric temperature anomaly in °C |
| `trajectorySnapshots[].carbonAtm` | number | Carbon in atmosphere in GtC |
| `trajectorySnapshots[].damages` | number | Climate damage as a fraction of gross output |
| `trajectorySnapshots[].carbonTax` | number | Carbon tax in $/tCO2 |
| `trajectorySnapshots[].emissions` | number | Net emissions in GtCO2/year |
| `totalTimesteps` | number | Total number of annual timesteps simulated |
detect_tipping_cascades
| Field | Type | Description |
|---|---|---|
| `systemRisk` | number | Aggregate system risk score, 0–1 |
| `tippedCount` | number | Count of tipping elements near or past bifurcation |
| `cascadeProbability` | number | Fraction of Monte Carlo paths with at least one cascade |
| `elements[]` | TippingElement[] | Per-element bifurcation analysis |
| `elements[].name` | string | Element name (AMOC, Amazon, IceSheet, CoralReefs, PermafrostArctic) |
| `elements[].discriminant` | number | Cusp discriminant Δ = 8a³ + 27b²; negative = bistable region |
| `elements[].nearBifurcation` | boolean | True when critical distance < 0.1 |
| `elements[].bifurcationType` | string | "fold", "cusp", or "stable" |
| `elements[].criticalDistance` | number | Distance from the discriminant zero set |
| `cascadeChains[]` | object[] | Chains of triggered tipping elements |
| `cascadeChains[].trigger` | string | First element to tip |
| `cascadeChains[].affected` | string[] | Downstream elements in the cascade |
| `cascadeChains[].severity` | number | Cascade severity, 0–1 |
| `pathCount` | number | Number of Monte Carlo paths run |
quantify_damage_uncertainty
| Field | Type | Description |
|---|---|---|
| `globalMeanDamage` | number | Posterior mean damage as a fraction of global GDP |
| `globalDamageStd` | number | Posterior standard deviation of the damage estimate |
| `tailRisk95` | number | 95th percentile damage across MCMC posterior samples |
| `tailRisk99` | number | 99th percentile damage (fat-tail risk metric) |
| `modelDisagreement` | number | Inter-model disagreement score (coefficient of variation) |
| `estimates[]` | DamageEstimate[] | Regional damage estimates |
| `estimates[].region` | string | World region name |
| `estimates[].meanDamage` | number | Regional posterior mean damage fraction |
| `estimates[].ci95Lower` | number | Lower bound of the 95% credible interval |
| `estimates[].ci95Upper` | number | Upper bound of the 95% credible interval |
optimize_robust_adaptation
| Field | Type | Description |
|---|---|---|
| `paretoFrontSize` | number | Number of strategies on the Pareto front |
| `paretoFront[]` | AdaptationStrategy[] | Strategies on the cost-robustness-regret Pareto front |
| `paretoFront[].name` | string | Strategy name |
| `paretoFront[].cost` | number | Implementation cost in USD billion |
| `paretoFront[].damageReduction` | number | Fraction of damages avoided |
| `paretoFront[].robustness` | number | Fraction of scenarios in which the strategy succeeds |
| `paretoFront[].regretMax` | number | Maximum regret across all scenarios |
| `primBoxes[]` | PRIMBox[] | PRIM-discovered vulnerable scenario regions |
| `primBoxes[].coverage` | number | Fraction of scenarios in the box |
| `primBoxes[].density` | number | Fraction of vulnerable scenarios in the box |
| `vulnerableScenarios` | number | Count of scenarios in which no strategy succeeds |
downscale_spatial_impacts
| Field | Type | Description |
|---|---|---|
| `pointCount` | number | Total grid points in the spatial field |
| `points[]` | SpatialPoint[] | Up to 50 grid points with anomaly predictions |
| `points[].lat` | number | Latitude |
| `points[].lon` | number | Longitude |
| `points[].predictedAnomaly` | number | GP posterior mean temperature anomaly in °C |
| `points[].uncertainty` | number | GP posterior standard deviation |
| `meanAnomaly` | number | Spatial mean anomaly across all grid points |
| `maxAnomaly` | number | Maximum anomaly at any grid point |
| `kernelParams` | object | Fitted Matern-3/2 kernel parameters (sigma2, lengthScale) |
| `logMarginalLikelihood` | number | GP log marginal likelihood (model fit quality) |
| `hotspots[]` | object[] | Top grid points with the highest predicted anomaly |
forecast_carbon_price_regimes
| Field | Type | Description |
|---|---|---|
| `currentRegime` | number | Index of the most probable current regime (0, 1, or 2) |
| `regimeNames` | string[] | ["low-stable", "policy-transition", "high-volatile"] |
| `regimeProbabilities` | number[] | Hamilton filter posterior regime probabilities |
| `transitionMatrix` | number[][] | 3×3 regime transition probability matrix |
| `forecastMean` | number | Expected carbon price in $/tCO2 |
| `forecastStd` | number | Forecast standard deviation |
| `forecast90CI` | [number, number] | 90% confidence interval [lower, upper] |
| `jumpProbability` | number | Probability of a price jump in the forecast period |
| `filteredPrices` | number[] | Last 20 Hamilton-filtered price estimates |
assess_biodiversity_economic_loss
| Field | Type | Description |
|---|---|---|
| `totalEconomicLoss` | number | Total ecosystem service loss in USD billion |
| `totalSpeciesLoss` | number | Estimated total species lost |
| `regionsAbovePercolation` | number | Count of regions above the percolation threshold (connectivity collapse) |
| `globalConnectivity` | number | Global habitat connectivity index, 0–1 |
| `losses[]` | BiodiversityLoss[] | Per-region breakdown |
| `losses[].region` | string | Region name |
| `losses[].habitatArea` | number | Habitat area in km² |
| `losses[].speciesLoss` | number | Species lost via the species-area relationship |
| `losses[].economicValue` | number | Ecosystem service loss in USD |
| `losses[].abovePercolation` | boolean | True if habitat fragmentation exceeds the percolation threshold |
| `losses[].connectivityIndex` | number | Regional connectivity from the largest cluster fraction |
attribute_climate_damages
| Field | Type | Description |
|---|---|---|
| `attributableFraction` | number | Fraction of observed damages attributable to anthropogenic forcing |
| `detected` | boolean | True if the detection statistic exceeds the chi-squared threshold |
| `detectionStatistic` | number | d = βᵀ·(Xᵀ·Cn⁻¹·X)⁻¹·β |
| `residualVariance` | number | Unexplained variance after attribution |
| `totalObserved` | number | Total observed damage signal magnitude |
| `totalAttributed` | number | Attributed damage magnitude |
| `signals[]` | object[] | Per-signal attribution results |
| `signals[].name` | string | Signal name (Anthropogenic, Natural, etc.) |
| `signals[].beta` | number | Scaling factor β for this signal |
| `signals[].betaStd` | number | Standard error of the β estimate |
| `signals[].significant` | boolean | True if the signal is detectable at 95% confidence |
How much does it cost to run climate-economic analysis?
All tools use pay-per-event pricing. Each tool call triggers a single charge event — compute costs are included. There are no subscriptions or minimum commitments.
| Tool | Price per call | 10 calls | 100 calls |
|---|---|---|---|
| `simulate_integrated_assessment` | $0.035 | $0.35 | $3.50 |
| `detect_tipping_cascades` | $0.030 | $0.30 | $3.00 |
| `quantify_damage_uncertainty` | $0.030 | $0.30 | $3.00 |
| `optimize_robust_adaptation` | $0.040 | $0.40 | $4.00 |
| `downscale_spatial_impacts` | $0.035 | $0.35 | $3.50 |
| `forecast_carbon_price_regimes` | $0.030 | $0.30 | $3.00 |
| `assess_biodiversity_economic_loss` | $0.030 | $0.30 | $3.00 |
| `attribute_climate_damages` | $0.030 | $0.30 | $3.00 |
The Apify Free plan includes $5 of monthly credits — at $0.030–$0.040 per call, that is approximately 125–165 tool calls per month at no cost. You can set a maximum spending limit per session in your MCP client; the server checks the limit before charging and returns a structured error if the limit is reached rather than silently failing.
Compare this to enterprise climate data platforms (Bloomberg NEF, MSCI Climate, Verisk) that charge $50,000–$200,000 per year for comparable scenario modeling capabilities.
How Climate-Economic Nexus MCP works
Phase 1: Parallel data collection from 18 actors
Every tool call triggers parallel execution of up to 18 Apify actors via Promise.all. Each actor call has a 180-second timeout and 256 MB memory allocation. The actor client in actor-client.ts handles failure gracefully — a single failing data source returns an empty array rather than aborting the entire run. Results are bundled into typed collections (climateData, economicData, disasterData, biodiversityData, geoData) before being passed to the scoring engine.
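The fan-out-with-graceful-failure pattern described above can be sketched in Python (the server itself is TypeScript and uses `Promise.all`; the actor names and payloads below are placeholders, not real Apify calls):

```python
import asyncio

async def call_actor(name: str) -> list:
    """Placeholder for a single actor call; raises on failure."""
    if name == "broken-source":
        raise RuntimeError("timeout")  # simulate one failing data source
    return [{"source": name, "value": 1.0}]

async def fan_out(actor_names: list[str]) -> dict[str, list]:
    # Run all calls concurrently; return_exceptions=True keeps one failure
    # from aborting the gather, mirroring the behavior described above.
    results = await asyncio.gather(
        *(call_actor(n) for n in actor_names), return_exceptions=True
    )
    # A failing source degrades to an empty array instead of an error.
    return {
        name: [] if isinstance(res, Exception) else res
        for name, res in zip(actor_names, results)
    }

bundles = asyncio.run(fan_out(["noaa-weather", "broken-source", "world-bank"]))
print(len(bundles["noaa-weather"]), len(bundles["broken-source"]))
```

The key design choice is that partial data is preferred to no data: downstream calibration code only ever sees lists, never exceptions.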
Phase 2: Data-calibrated model initialization
Each mathematical model is calibrated from live data before running. The DICE model, for example, extracts numeric temperature anomalies from NOAA and GDACS records to set baseTemp, pulls GDP figures from World Bank and IMF to set baseGdp, and reads CO2 concentration data to initialize baseEmissions. If a data source returns no usable numeric values, the model falls back to published IPCC/Nordhaus baseline constants. This means results reflect actual current conditions rather than static parameter sets.
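The calibrate-or-fallback logic can be illustrated with a small sketch; the record shape, field names, and fallback constants here are hypothetical stand-ins for the actual calibration code:

```python
# Sketch of the calibrate-or-fallback pattern: average whatever numeric
# values the live sources returned, else use a published baseline constant.
def calibrate(records: list[dict], field: str, fallback: float) -> float:
    """Mean of the numeric values found in live records, or the fallback."""
    values = [r[field] for r in records
              if isinstance(r.get(field), (int, float))]
    return sum(values) / len(values) if values else fallback

# Hypothetical NOAA-style records; the "n/a" entry is skipped as non-numeric.
noaa_records = [{"anomaly": 1.12}, {"anomaly": 1.08}, {"anomaly": "n/a"}]
base_temp = calibrate(noaa_records, "anomaly", fallback=1.1)  # mean of live values
base_gdp = calibrate([], "gdp", fallback=105e12)              # empty -> fallback
print(base_temp, base_gdp)
```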
Phase 3: Algorithm execution
Eight distinct mathematical frameworks run after calibration:
DICE Integrated Assessment implements Nordhaus's three-reservoir carbon cycle: M_ATM(t+1) = φ₁₁·M_ATM + φ₂₁·M_UP + E(t) with transfer coefficients from the 2017 DICE calibration (φ₁₁ = 0.88, φ₁₂ = 0.0472, etc.). The two-box energy balance model advances temperature using T_ATM(t+1) = T_ATM + ξ₁·(F(t) − λ·T_ATM − ξ₃·(T_ATM − T_OCEAN)) with climate feedback λ = 1.18 and forcing coefficient η = 3.68 W/m². Carbon tax ramps at 3% per year from the initial value, with abatement fraction computed from μ = min(carbonTax / backstopPrice, 1).
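A single timestep of the two-box energy balance above can be sketched as follows. The ξ heat-transfer coefficients are illustrative assumptions, and the logarithmic CO2 forcing form is the standard DICE choice; λ and η follow the values quoted in the text:

```python
import math

LAMBDA = 1.18   # climate feedback parameter quoted above (W/m^2 per °C)
ETA = 3.68      # forcing coefficient quoted above (W/m^2 per CO2 doubling)
XI1, XI3, XI4 = 0.1005, 0.088, 0.025  # heat-transfer coefficients (illustrative)

def step_temperature(t_atm, t_ocean, co2_atm_gtc, co2_preind_gtc=588.0):
    """Advance atmospheric and ocean temperature anomalies by one timestep.

    T_ATM(t+1) = T_ATM + xi1*(F - lambda*T_ATM - xi3*(T_ATM - T_OCEAN)),
    with F = eta * log2(M_ATM / M_preindustrial).
    """
    forcing = ETA * math.log2(co2_atm_gtc / co2_preind_gtc)
    t_atm_next = t_atm + XI1 * (forcing - LAMBDA * t_atm
                                - XI3 * (t_atm - t_ocean))
    t_ocean_next = t_ocean + XI4 * (t_atm - t_ocean)  # slow deep-ocean uptake
    return t_atm_next, t_ocean_next

# Starting from the 2025 snapshot values in the output example
t_atm, t_ocean = step_temperature(1.10, 0.33, 851.0)
print(round(t_atm, 3), round(t_ocean, 3))
```

The ocean box lags the atmosphere because ξ₄ is small, which is what produces the temperatureOcean values trailing temperature in the trajectory snapshots.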
Cusp Catastrophe Tipping parameterizes each of the 5 tipping elements using the Thom cusp potential V(x;a,b) = x⁴/4 + ax²/2 + bx. The discriminant Δ = 8a³ + 27b² determines proximity to the bifurcation set. Monte Carlo paths use Euler-Maruyama discretization of coupled SDEs with Heaviside coupling: x_i(t+dt) = x_i(t) + f_i(x)·dt + Σⱼ αᵢⱼ·H(xⱼ − θⱼ)·dt + σᵢ·√dt·Z. n_paths defaults to 5,000.
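The discriminant test and one Euler-Maruyama step for a single element can be sketched as below; parameter values are illustrative, not the server's calibrated ones:

```python
import math
import random

def cusp_discriminant(a: float, b: float) -> float:
    """Delta = 8a^3 + 27b^2; negative means the bistable (two-well) region."""
    return 8 * a**3 + 27 * b**2

def em_step(x, a, b, coupling_in, dt=0.1, sigma=0.2,
            rng=random.Random(42)):
    """One Euler-Maruyama step of dx = -(x^3 + a*x + b) dt + coupling + noise.

    The drift is -dV/dx for the cusp potential V(x) = x^4/4 + a*x^2/2 + b*x;
    coupling_in stands for the summed Heaviside term from tipped neighbors.
    """
    drift = -(x**3 + a * x + b)
    return x + (drift + coupling_in) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)

# Control values from the detect_tipping_cascades output example: the sign
# of the discriminant is what flags AMOC as bistable.
print(cusp_discriminant(-1.823, 0.091) < 0)
```

Running 5,000 such paths and counting those in which a Heaviside-coupled neighbor subsequently crosses its own threshold gives the cascadeProbability field.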
Gaussian Process Spatial Downscaling uses a Matern-3/2 kernel k(r) = σ²·(1 + √3·r/l)·exp(−√3·r/l) with Lanczos gamma function and modified Bessel K_ν approximation for the general Matern form. GP prediction follows μ* = K*ᵀ·(K + σₙ²·I)⁻¹·y with posterior variance Σ* = K** − K*ᵀ·(K + σₙ²·I)⁻¹·K*. The Cholesky-like inversion uses a seeded Mulberry32 PRNG for reproducibility given the same input query.
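A minimal version of the Matern-3/2 prediction equation, restricted to two 1-D training points so the matrix inverse can be written out explicitly (hyperparameters and data are illustrative, not the server's fitted values):

```python
import math

def matern32(r: float, sigma2: float = 1.0, length: float = 2.0) -> float:
    """k(r) = sigma^2 * (1 + sqrt(3)*r/l) * exp(-sqrt(3)*r/l)."""
    s = math.sqrt(3.0) * r / length
    return sigma2 * (1.0 + s) * math.exp(-s)

def gp_predict_2pt(x1, y1, x2, y2, x_star, noise=1e-2):
    """mu* = k*^T (K + sn^2 I)^-1 y for two training points (2x2 inverse)."""
    k11 = matern32(0.0) + noise
    k22 = matern32(0.0) + noise
    k12 = matern32(abs(x1 - x2))
    det = k11 * k22 - k12 * k12
    # alpha = (K + sn^2 I)^-1 y, via the explicit 2x2 inverse
    a1 = (k22 * y1 - k12 * y2) / det
    a2 = (-k12 * y1 + k11 * y2) / det
    return matern32(abs(x_star - x1)) * a1 + matern32(abs(x_star - x2)) * a2

# Anomaly observed at grid coordinates 0 and 2; predict the midpoint.
print(round(gp_predict_2pt(0.0, 1.1, 2.0, 1.8, 1.0), 2))
```

The same algebra scales to the full grid with a Cholesky solve; the posterior variance formula quoted above is what populates the per-point uncertainty field.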
Hamilton Filter for Carbon Price Regimes performs forward recursion: ξ_{t|t} = (f(y_t|s_t) ⊙ ξ_{t|t−1}) / (1ᵀ·f(y_t|s_t) ⊙ ξ_{t|t−1}) across 3 regimes with jump-diffusion dynamics superimposed on regime-specific drift and volatility parameters.
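One step of the forward recursion can be sketched for 3 regimes. The transition matrix reuses the example output shown earlier; the prior regime probabilities and observation likelihoods are made-up inputs:

```python
def hamilton_step(xi_prev, P, likelihoods):
    """xi_{t|t} proportional to f(y_t|s_t) * (P^T xi_{t-1|t-1}), normalized."""
    n = len(xi_prev)
    # Prediction: xi_{t|t-1} = P^T xi_{t-1|t-1}
    xi_pred = [sum(P[i][j] * xi_prev[i] for i in range(n)) for j in range(n)]
    # Update: weight by the regime-conditional likelihood of y_t, normalize
    unnorm = [likelihoods[j] * xi_pred[j] for j in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

P = [[0.92, 0.07, 0.01],
     [0.05, 0.88, 0.07],
     [0.03, 0.12, 0.85]]  # example transition matrix from the output above

xi = hamilton_step([0.7, 0.25, 0.05], P,
                   likelihoods=[0.1, 0.8, 0.3])  # observation favors regime 1
print([round(p, 3) for p in xi])
```

Iterating this step over the price series yields the filtered regime probabilities; the jump-diffusion component enters through the regime-conditional likelihoods.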
Phase 4: Structured JSON response
Results are serialized to JSON and returned as MCP text content items. Large arrays are trimmed before returning (e.g., trajectory snapshots at every 10th step, spatial points capped at 50, filtered prices at last 20) to keep response sizes within MCP client limits while preserving analytical completeness.
Tips for best results
- Include geographic specificity in queries. "coastal Bangladesh flood risk" produces better-calibrated results from NOAA, FEMA, and Nominatim than "flood risk". Nominatim resolves place names to coordinates that anchor the spatial downscaling grid.
- Start with `simulate_integrated_assessment` for new topics. It calls all 18 data sources and returns the broadest cross-section of climate-economic context. Use the SCC and temperature trajectory outputs to frame more specific follow-up queries to other tools.
- Use `quantify_damage_uncertainty` before citing numbers in reports. The tail risk at the 95th and 99th percentile is often 3–5× the mean estimate. The `modelDisagreement` field tells you how much inter-regional variance exists — high disagreement means regional breakdowns matter more than global averages.
- Pair `detect_tipping_cascades` with `attribute_climate_damages`. Cascades identify which elements are near bifurcation; attribution tells you what fraction of the driving forcing is anthropogenic. Together they answer both "how close are we?" and "how much is our fault?".
- Set `horizon_years` to 50 for near-term planning and 150 for long-term strategy. The DICE model runs annual timesteps, so 300 years (the maximum) adds computation time. For infrastructure with 30-year lifespans, 50-year horizons with a higher initial carbon tax are most informative.
- Use `forecast_carbon_price_regimes` quarterly. Carbon markets are regime-dependent, and transition probabilities change as new policy signals emerge. Running the tool with updated queries like "EU ETS reform 2025" or "California AB 32 auction" helps detect emerging regime shifts before they are priced in.
- Chain `downscale_spatial_impacts` into asset-level risk. Pass the `hotspots` array (grid points with the highest predicted anomaly) to a geocoder or GIS system to identify which physical assets fall within high-anomaly zones.
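The hotspot-chaining tip can be sketched end to end. The hotspot record below follows the documented downscale output fields (lat, lon, predictedAnomaly); the asset list and the 200 km cutoff are made-up examples:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates in kilometers."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# One hotspot from a downscale run, plus a hypothetical asset register
hotspots = [{"lat": 23.7, "lon": 90.4, "predictedAnomaly": 2.1}]
assets = [{"name": "Dhaka plant", "lat": 23.8, "lon": 90.4},
          {"name": "Chennai depot", "lat": 13.1, "lon": 80.3}]

# Flag assets within 200 km of any high-anomaly grid point
at_risk = [a["name"] for a in assets for h in hotspots
           if haversine_km(a["lat"], a["lon"], h["lat"], h["lon"]) < 200]
print(at_risk)
```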
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| Company Deep Research | Run climate-economic scenario analysis first, then pass SCC and transition risk findings into company deep research to assess issuer-level climate exposure |
| SEC EDGAR Filing Analyzer | Cross-reference climate damage uncertainty outputs against corporate 10-K climate risk disclosures for materiality gap analysis |
| ESG Risk Assessment | Feed attribution fractions and tipping cascade probabilities into ESG scoring as quantitative physical risk inputs |
| Competitor Analysis Report | Use carbon price regime forecasts to benchmark competitors' emissions cost exposure across different ETS regimes |
| Federal Contract Intelligence | Identify government climate adaptation contracts relevant to the PRIM-discovered vulnerable scenario regions |
| Regulatory Change Tracker | Monitor carbon pricing policy changes that would trigger regime transitions identified by forecast_carbon_price_regimes |
| Stock Intelligence Report | Map social cost of carbon estimates onto sector earnings exposure for climate-adjusted equity valuation |
Limitations
- Models are simplified implementations, not IPCC-grade simulators. DICE, for example, uses annual timesteps and linear carbon tax optimization rather than full non-linear dynamic programming. Results are appropriate for scenario analysis and directional risk assessment, not regulatory submission without expert review.
- Data calibration depends on query quality. If a query returns no numeric values from a data source (e.g., NOAA returns no temperature records for an abstract query), that source falls back to baseline constants. Specific, geographically grounded queries produce better-calibrated results.
- All 18 actor calls run on each invocation. There is no caching between calls. Running the same query twice incurs the full per-event charge twice.
- Spatial downscaling grid resolution is limited to 1–10 degrees. Sub-degree or asset-level resolution is not available within this tool. For finer spatial analysis, export the hotspot coordinates and run them through a dedicated geospatial service.
- Carbon price forecasting covers near-to-medium term regime dynamics. Long-run structural carbon price modeling (decades-out absolute price levels) is not the intended use case; the regime-switching model is calibrated to identify transition probabilities over 1–5 year horizons.
- Attribution analysis applies to aggregate damage signals. The optimal fingerprinting implementation cannot attribute individual events in isolation — it operates on the statistical ensemble of disaster records returned by GDACS and FEMA. For single-event attribution, domain-expert counterfactual analysis remains necessary.
- Biodiversity percolation uses a 50×50 default lattice. Increasing `lattice_size` to 100 gives finer spatial resolution at the cost of O(n²) compute time. Lattice values above 100 are not recommended.
- Exchange rate normalization is USD-centric. Damage estimates are normalized to USD using live exchange rates. For multi-currency portfolio analysis, interpret regional damage fractions (percentage of GDP) rather than absolute dollar figures.
❓ FAQ
How many tool calls can I make per month on the free Apify plan? The free plan includes $5 of monthly credits. At $0.030–$0.040 per call, that is approximately 125–165 tool calls per month. Paid plans start at $49/month with significantly higher credit allowances.
Does Climate-Economic Nexus MCP use real climate models? The mathematical frameworks implement published peer-reviewed methodologies: the Nordhaus DICE integrated assessment model, Thom cusp catastrophe theory for tipping element analysis, Allen and Stott optimal fingerprinting for attribution, and Lempert PRIM for robust decision-making. They run on real data from 18 authoritative sources. Results are scenario analyses, not deterministic predictions, and should be interpreted by domain-qualified professionals for high-stakes decisions.
How accurate are the temperature and damage projections?
Projections reflect calibrated model runs, not ensemble IPCC outputs. Uncertainty is explicitly quantified: quantify_damage_uncertainty returns 95th and 99th percentile tail risks; detect_tipping_cascades returns cascade probability distributions across 5,000 Monte Carlo paths; downscale_spatial_impacts returns posterior standard deviation at every grid point. Use the uncertainty bands, not just the point estimates.
What time horizons does the server cover?
simulate_integrated_assessment supports up to 300-year horizons (default 100 years, annual timesteps from 2025). forecast_carbon_price_regimes covers near-to-medium term regime transitions (1–5 years). downscale_spatial_impacts produces a current-state snapshot rather than a forward projection. detect_tipping_cascades runs over the specified horizon_years (default 100 years).
Can I use this for TCFD, EU Taxonomy, or other regulatory climate disclosures? The tools produce scenario analysis outputs consistent with TCFD physical risk frameworks and support qualitative scenario narratives required by the EU Taxonomy. Compliance filings require review by qualified climate risk professionals. The server does not produce regulatory filings — it produces the quantitative inputs that support them.
How is this different from Bloomberg NEF or MSCI Climate? Bloomberg NEF and MSCI Climate are enterprise platforms with human analyst teams, proprietary models, and annual licenses starting at $50,000+. This MCP server is a self-service API that implements published open-source methodologies on top of publicly available data. It is suitable for teams building climate analytics into their own tools, AI workflows, or research pipelines without enterprise licensing costs.
Is it legal to use the underlying data sources? All 18 data sources — NOAA, USGS, GDACS, OpenAQ, World Bank, IMF, FRED, OECD, Eurostat, GBIF, IUCN, FEMA, and others — provide publicly available data under open or government open data licenses. Use of this server does not involve scraping private or paywalled content. See Apify's guide on web scraping legality for general guidance.
How long does each tool call take?
Tool call duration depends on data source response times. The server fans out all actor calls in parallel, so wall-clock time is approximately equal to the slowest responding data source (typically 30–120 seconds). simulate_integrated_assessment calls 18 actors and is typically the slowest at 60–150 seconds. forecast_carbon_price_regimes calls fewer high-latency economic sources and typically completes in 30–90 seconds.
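The fan-out pattern described above can be sketched in a few lines. This is an illustrative model, not the server's actual code; the source names and latencies are made up to show why wall-clock time tracks the slowest source rather than the sum of all latencies.

```python
import asyncio

async def fetch_source(name: str, latency: float) -> tuple[str, float]:
    # Stand-in for a real actor/API call to one data source.
    await asyncio.sleep(latency)
    return name, latency

async def fan_out(sources: dict[str, float]) -> list[tuple[str, float]]:
    # All sources are queried concurrently; gather waits for the slowest one.
    tasks = [fetch_source(name, lat) for name, lat in sources.items()]
    return await asyncio.gather(*tasks)

# Illustrative latencies in seconds (scaled down from the real 30–120 s range).
sources = {"NOAA": 0.03, "GDACS": 0.12, "WorldBank": 0.07}
results = asyncio.run(fan_out(sources))
# Wall-clock time ≈ max latency (0.12 s), not the sum (0.22 s).
```

With 18 real sources at 30–120 seconds each, the same logic explains the observed totals: one slow source dominates the whole call.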
Can I run multiple tools in sequence in one agent conversation?
Yes. Each tool call is independent. A common pattern is: run simulate_integrated_assessment first for context, then detect_tipping_cascades for cascade risk, then quantify_damage_uncertainty for tail-risk bounds. The spending limit check on each tool prevents accidental overrun.
What happens if one of the 18 data sources is unavailable? The actor client handles failures gracefully. Each individual actor call is wrapped in a try-catch that returns an empty array on failure. The model then falls back to baseline constants for the missing data source. Results are degraded but not broken. The tool logs which actors failed so you can investigate data source availability.
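A minimal sketch of that degradation strategy, assuming a hypothetical `co2_ppm` field and baseline constant for illustration (the server's actual field names and baselines are not documented here):

```python
BASELINES = {"co2_ppm": 421.0}  # assumed fallback constant, for illustration

def safe_call(fetch, source_name: str, failed_log: list):
    # Each source call is wrapped so a failure yields an empty list
    # instead of propagating; the failed source is logged for inspection.
    try:
        return fetch()
    except Exception:
        failed_log.append(source_name)
        return []

def co2_with_fallback(records: list) -> float:
    # With no records, fall back to the baseline constant.
    values = [r["co2_ppm"] for r in records]
    return sum(values) / len(values) if values else BASELINES["co2_ppm"]

def flaky_fetch():
    raise TimeoutError("source unavailable")

failed: list = []
records = safe_call(flaky_fetch, "NOAA", failed)
co2 = co2_with_fallback(records)  # 421.0 — the baseline, since the call failed
```

The result is degraded but usable, and `failed` tells you which source to investigate.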
Can I connect this server to a custom AI agent framework?
Yes. The /mcp endpoint implements the Model Context Protocol (MCP) over HTTP with StreamableHTTP transport. Any framework that speaks MCP — including LangChain with the MCP adapter, LlamaIndex, AutoGen, and custom agent loops — can connect to it. The server also accepts raw tools/call JSON-RPC requests, so it works with any HTTP client without an MCP library.
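A raw `tools/call` request is plain JSON-RPC 2.0. The sketch below builds one for the `detect-tipping-cascades` tool from the pricing table; the `query` argument shape is an assumption for illustration — check the server's tool schema for the exact parameters.

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "detect-tipping-cascades",
        "arguments": {"query": "AMOC Atlantic thermohaline circulation weakening"},
    },
}

body = json.dumps(payload)
# POST `body` to https://ryanclinton--climate-economic-nexus-mcp.apify.actor/mcp
# with Content-Type: application/json and your Apify token in the
# Authorization header.
```

Because the envelope is just JSON over HTTP, any HTTP client works; an MCP library only adds convenience on top.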
How is the DICE social cost of carbon calculated?
The social cost of carbon is computed as the present value of the marginal damage from one additional GtCO2 of emissions, using the Nordhaus quadratic damage function Ω(T) = 1/(1 + π₁·T + π₂·T²) with π₂ = 0.00236, discounted at ρ = 1.5% pure time preference. The three-reservoir carbon cycle uses the DICE-2016 transfer coefficients (φ₁₁ = 0.88) to track atmospheric carbon buildup.
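A worked sketch of the pieces quoted above. π₂ = 0.00236 and ρ = 1.5% come from the text; π₁ = 0 is an assumption here (DICE-2016 effectively uses only the quadratic term), and the full SCC additionally requires the model's emissions-to-temperature response, which is omitted.

```python
PI1, PI2 = 0.0, 0.00236  # PI1 = 0 is an assumption; PI2 is from the text
RHO = 0.015              # pure rate of time preference, from the text

def omega(T: float) -> float:
    """Fraction of gross output remaining at warming T (°C above pre-industrial)."""
    return 1.0 / (1.0 + PI1 * T + PI2 * T**2)

def damage_fraction(T: float) -> float:
    """Share of output lost to climate damages at warming T."""
    return 1.0 - omega(T)

def present_value(flow: float, years_out: float) -> float:
    """Discount a future damage flow at pure time preference RHO."""
    return flow / (1.0 + RHO) ** years_out

# At 3 °C of warming, damages are about 2.1% of gross output.
d3 = damage_fraction(3.0)
```

The SCC itself is then the discounted sum, over the simulation horizon, of the extra damages caused by a marginal emissions pulse propagated through the three-reservoir carbon cycle.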
Help us improve
If you encounter unexpected results or a tool call fails, enable run sharing so we can diagnose faster:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see the run logs and data source responses when something goes wrong. Your data is only visible to the actor developer, not publicly.
Integrations
- Zapier — Trigger climate scenario analysis from Zapier workflows; pipe social cost of carbon outputs into Google Sheets or Airtable for reporting
- Make — Automate monthly carbon price regime monitoring; route regime-transition alerts to Slack or email
- Google Sheets — Export DICE trajectory snapshots and regional damage estimates directly into spreadsheets for board-level reporting
- Apify API — Trigger analysis programmatically from Python or JavaScript pipelines; integrate into existing climate risk platforms
- Webhooks — Receive notifications when long-running DICE simulations or cascade analyses complete
- LangChain / LlamaIndex — Register all 8 tools as LangChain tools or LlamaIndex query engines for climate-aware RAG pipelines
Troubleshooting
Tool returns default/baseline values despite a specific query. This usually means the query returned no numeric data from one or more data sources. Check that your query contains geographic context and relevant domain terms (e.g., "Amazon deforestation tipping 2050" rather than "tipping points"). Abstract queries like "climate" produce fewer numeric matches across NOAA, GDACS, and World Bank.
Tool call times out in your MCP client. The server fans out to 18 actors in parallel with a 180-second timeout each. Total wall-clock time can reach 2–3 minutes for tools querying all 18 sources. Configure your MCP client's tool timeout to at least 300 seconds. Claude Desktop's default timeout is 60 seconds — increase it in your client settings if possible.
Spending limit reached error on the first call. This means the eventChargeLimitReached flag fired before the tool ran. Check your Apify account balance and top up credits, or increase the run spending limit in your actor configuration. The free plan's $5 of credits is the most common cause.
cascadeProbability is 0.0 for all tipping elements. The cascade simulation uses a seeded Mulberry32 PRNG initialized from the query string hash. Very short or single-word queries may produce degenerate parameter initializations. Try a more descriptive query: "AMOC Atlantic thermohaline circulation weakening Arctic warming" produces a richer parameterization than "AMOC".
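Seeded PRNGs are what make identical queries reproduce identical cascade parameterizations. Below is a Python port of the Mulberry32 algorithm for illustration (the server's implementation is presumably JavaScript); the query-to-seed hash shown is hypothetical, since the server's actual mapping is not documented.

```python
def mulberry32(seed: int):
    """Mulberry32 PRNG: same seed, same sequence of floats in [0, 1)."""
    state = seed & 0xFFFFFFFF
    def rand() -> float:
        nonlocal state
        state = (state + 0x6D2B79F5) & 0xFFFFFFFF
        t = state
        t = ((t ^ (t >> 15)) * (t | 1)) & 0xFFFFFFFF
        t = (t ^ ((t + (((t ^ (t >> 7)) * (t | 61)) & 0xFFFFFFFF)) & 0xFFFFFFFF)) & 0xFFFFFFFF
        return ((t ^ (t >> 14)) & 0xFFFFFFFF) / 2**32
    return rand

def seed_from_query(query: str) -> int:
    # Hypothetical 32-bit string hash, for illustration only.
    h = 0
    for ch in query:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

rng = mulberry32(seed_from_query("AMOC Atlantic thermohaline circulation weakening"))
draws = [rng() for _ in range(3)]  # deterministic for this query, each in [0, 1)
```

Longer, more descriptive queries hash to richer seeds and feed more varied parameter initializations into the 5,000 Monte Carlo paths.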
attributableFraction returns exactly 1.0. This indicates the natural variability baseline in the disaster data was near zero, making the anthropogenic fraction dominate by default. Check that your query includes a specific extreme event type (e.g., "2024 hurricane season damage attribution") so GDACS and FEMA return varied disaster records for the baseline calibration.
Responsible use
- All 18 underlying data sources provide publicly available data under open or government open data licenses.
- Model outputs are probabilistic scenario analyses. Do not present them as deterministic forecasts in regulatory, legal, or financial contexts without expert review.
- Comply with applicable data protection and financial regulations when using outputs in investment decisions or public disclosures.
- For guidance on web scraping legality, see Apify's guide.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom integrations, enterprise deployments, or additional mathematical frameworks, reach out through the Apify platform.
How it works
Configure
Set your parameters in the Apify Console or pass them via API.
Run
Click Start, trigger via API, webhook, or set up a schedule.
Get results
Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.
Use cases
Risk Teams
Quantify physical climate risk and tail-risk bounds as inputs to TCFD-aligned scenario analysis.
Carbon Markets
Monitor carbon price regime transitions and route regime-change alerts into reporting workflows.
Researchers
Run reproducible integrated-assessment scenarios on live, openly licensed data.
Developers
Integrate via REST API or use as an MCP tool in AI workflows.
Ready to try Climate-Economic Nexus MCP?
Start for free on Apify. No credit card required.
Open on Apify Store