
Insurance Underwriting Intelligence MCP Server

Insurance underwriting intelligence for commercial P&C teams — this MCP server delivers multi-peril risk assessment for any property location using 8 live government data sources. It produces a **Composite Peril Score (0-100)**, a four-tier risk classification (Preferred / Standard / Substandard / Decline), a premium modifier, and actionable underwriting notes — all from a single tool call.


Pricing

Pay Per Event model. You only pay for what you use.

| Event | Description | Price |
|---|---|---|
| assess_location_risk | Quick multi-peril check — FEMA, earthquakes, weather, floods. | $0.10 |
| analyze_disaster_history | FEMA disaster declarations, frequency trends, incident types. | $0.05 |
| evaluate_seismic_exposure | USGS earthquake data, magnitude distribution, proximity analysis. | $0.05 |
| check_flood_risk | UK flood warnings + FEMA flood disaster history. | $0.06 |
| measure_environmental_liability | OpenAQ air quality, WHO guideline comparison, pollution analysis. | $0.05 |
| score_crime_proximity | UK police crime data, violent/property breakdown, exposure gradient. | $0.05 |
| project_climate_trajectory | 5/10/25-year forward projections from historical disaster/weather trends. | $0.10 |
| generate_underwriting_brief | All 8 data sources, 4 scoring models, risk tier, premium modifier. | $0.30 |

Example: 100 events = $10.00 · 1,000 events = $100.00

Connect to your AI agent

Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.

MCP Endpoint
https://ryanclinton--insurance-underwriting-intelligence-mcp.apify.actor/mcp
Claude Desktop Config
{
  "mcpServers": {
    "insurance-underwriting-intelligence-mcp": {
      "url": "https://ryanclinton--insurance-underwriting-intelligence-mcp.apify.actor/mcp"
    }
  }
}

Documentation


P&C underwriters, actuarial teams, reinsurers, and risk engineers can query any address or region and receive structured risk intelligence across natural disasters, seismic exposure, flood warnings, weather severity, environmental contamination, crime proximity, and projected climate trajectory at 5, 10, and 25-year horizons. No manual data gathering, no vendor subscriptions — pay only per assessment.

What data can you extract?

| Data Point | Source | Example |
|---|---|---|
| 📋 FEMA major disaster declarations | FEMA Disaster Declarations | 14 DR-type declarations, flood/hurricane dominant |
| 🌊 Active flood warnings with severity | UK Environment Agency | 3 severe flood warnings active within 5km |
| 🌍 Seismic event magnitude distribution | USGS Earthquake Search | 2× Mag 5-6 events, 8× Mag 4-5 within 100km |
| 🌩 NOAA weather alert severity | NOAA Weather Alerts | 2 EXTREME tornado warnings, 1 SEVERE thunderstorm |
| 🏭 Air quality vs WHO guidelines | OpenAQ Air Quality | PM2.5 at 34 µg/m³ — 2.3× WHO threshold exceeded |
| 👮 Crime exposure by category | UK Police Crime Data | 42 violent crimes, 87 property crimes in period |
| 🏠 Property transaction context | UK Land Registry | Median sale price £285,000 — 24 transactions |
| 📍 Geocoded coordinates | Nominatim / OpenStreetMap | 51.5074°N, 0.1278°W resolved from address |
| 📊 Composite Peril Score (0-100) | 4-model scoring engine | Score: 58 — HIGH peril tier |
| 🏷 Risk tier classification | Composite model | SUBSTANDARD — surcharges recommended |
| ⚙️ Premium modifier | Risk-weighted formula | 1.79× base premium |
| 📈 Climate trajectory projections | Historical trend model | 5yr: 64 / 10yr: 71 / 25yr: 86 (WORSENING) |

Why use Insurance Underwriting Intelligence MCP Server?

Manual underwriting research for a single property submission means opening FEMA's disaster search, USGS earthquake feeds, NOAA alert dashboards, local police statistics, and air quality portals — then reconciling all of those sources into a coherent risk picture. That takes 45-90 minutes per risk. At scale, it is impractical.

This MCP automates the entire workflow. One tool call dispatches parallel requests to all 8 data sources, runs four scoring models, and returns a structured underwriting brief in under 60 seconds. Your AI assistant in Claude, Cursor, or Windsurf can call these tools mid-conversation and surface risk signals instantly.

  • Scheduling — run portfolio-wide risk rescoring annually at policy renewal using Apify's scheduler, with no manual intervention
  • API access — trigger assessments from Python, JavaScript, or any HTTP client and feed results directly into underwriting workbenches
  • Proxy rotation — Apify's built-in proxy infrastructure ensures reliable data retrieval at scale without blocks
  • Monitoring — configure Slack or email alerts when assessments complete or when spending limits are reached
  • Integrations — connect results to Google Sheets, HubSpot, Zapier, or Make for downstream workflow automation

Features

  • 8 parallel data sources — FEMA, USGS, NOAA, UK Flood Warnings, UK Police, OpenAQ, UK Land Registry, and Nominatim geocoder are queried simultaneously via Promise.allSettled, so one source failure does not block the rest
  • Composite Peril Score (0-100) — four weighted scoring models combine into a single risk number: Composite Peril 35%, Climate Trajectory 25%, Crime Exposure 20%, Environmental Contamination 20%
  • Magnitude-weighted seismic scoring — USGS events are scored by magnitude bucket: Mag 6+ events contribute 10 points each, Mag 5+ contribute 5, Mag 4+ contribute 2, capped at 25 per assessment
  • WHO-compliant air quality scoring — six pollutants (PM2.5, PM10, NO2, SO2, O3, CO) are compared to WHO guideline thresholds; exceedances are weighted by their exceedance ratio, not just presence
  • Violent vs property crime separation — UK Police data is classified into 7 violent crime types and 6 property crime types with differential underwriting weights (violent: 4pts each; property: 2pts each)
  • Climate acceleration model — FEMA disaster declarations are bucketed by year; the ratio of the most recent decade average to the prior decade average determines an acceleration factor that projects risk forward using an exponential growth model
  • 4 risk tier classifications — Preferred (0-24), Standard (25-49), Substandard (50-74), Decline (75-100) with corresponding underwriting action guidance
  • Premium modifier output — a multiplier from 0.8 (preferred discount) to 2.5 (maximum surcharge) is calculated from the composite risk score using the formula 0.8 + (score / 100) × 1.7
  • Automatic geocoding — any address or region name is converted to latitude/longitude via Nominatim before spatial queries to USGS, NOAA, and OpenAQ
  • Configurable seismic radius — evaluate_seismic_exposure accepts a radiusKm parameter (default 100km) for proximity-based earthquake analysis
  • Underwriting notes generation — the brief includes narrative decision notes flagging: senior underwriter referral triggers, dominant peril exclusions, security requirement recommendations, and environmental endorsement suggestions
  • Spending limit enforcement — every tool checks Actor.charge() before executing; if your per-run budget cap is reached, the tool returns a structured error immediately rather than silently failing

Use cases for insurance underwriting intelligence

Commercial property underwriting

P&C underwriters processing new commercial property submissions spend disproportionate time gathering basic hazard data. This MCP returns a structured risk brief in under 60 seconds via generate_underwriting_brief. Triage a full day's submissions by composite score in minutes, focusing manual review on SUBSTANDARD and DECLINE-tier properties.

Actuarial climate trajectory modeling

Actuarial teams pricing long-tail property risks need forward-looking hazard data, not just current exposure. project_climate_trajectory uses historical FEMA disaster frequency trends to project risk at 5, 10, and 25-year horizons — feeding directly into catastrophe model calibration for climate-adjusted pricing.

Reinsurance portfolio concentration analysis

Reinsurers evaluating treaty submissions need rapid portfolio-level hazard aggregation. Running assess_location_risk across a cedent's property schedule identifies geographic peril concentration and locations where multiple hazards converge — the zones most likely to drive correlated losses.

Risk engineering pre-survey prioritization

Before deploying field engineers, risk engineering teams call evaluate_seismic_exposure and check_flood_risk to identify the highest-hazard properties requiring in-person inspection. Focus field resources on locations signalling MODERATE peril or above.

Environmental liability underwriting

measure_environmental_liability provides location-specific air quality readings against WHO guidelines with per-pollutant exceedance counts. Directly informative for respiratory illness claim exposure and pollution legal liability pricing.

Flood and weather specialty lines

check_flood_risk combines UK Environment Agency active flood warnings with the historical FEMA flood disaster record, building a dual-source flood exposure picture for specialty lines pricing and exclusion decisions.

How to use insurance underwriting risk assessment

  1. Connect your MCP client — add the server URL https://insurance-underwriting-intelligence-mcp.apify.actor/mcp to Claude Desktop, Cursor, Windsurf, or any MCP-compatible AI tool. You need an Apify API token.
  2. Ask for a risk assessment — tell your AI assistant the property address or region, for example: "Run an underwriting brief for 1400 Brickell Ave, Miami, FL." The AI calls the appropriate tool automatically.
  3. Review the structured output — the server returns a JSON brief with composite score, risk tier, premium modifier, and per-peril breakdowns. The AI summarises the key signals in plain language.
  4. Download or route results — copy the JSON output into your underwriting workbench, push to Google Sheets via Apify integrations, or trigger a webhook to your policy management system.

Input parameters

This server uses no actor-level input parameters — all inputs are passed as tool arguments when calling each MCP tool. See the tool reference below.

Tool parameters

All tools except analyze_disaster_history accept optional latitude and longitude (number, auto-geocoded if omitted). The table below lists tool-specific parameters only.

| Tool | Parameter | Type | Required | Default | Description |
|---|---|---|---|---|---|
| All tools | location | string | Yes | — | Address, city, state, or region |
| All tools | latitude | number | No | auto | Decimal latitude — skips geocoding step |
| All tools | longitude | number | No | auto | Decimal longitude — skips geocoding step |
| analyze_disaster_history | location | string | Yes | — | State, county, or region for FEMA search |
| evaluate_seismic_exposure | radiusKm | number | No | 100 | Earthquake search radius in kilometres |
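
For example, a JSON-RPC `tools/call` request body that overrides the seismic search radius might look like the sketch below. The payload shape follows the API examples later in this document; the location and coordinate values are purely illustrative.

```javascript
// Illustrative tools/call payload for evaluate_seismic_exposure with an
// explicit 200 km search radius and pre-geocoded coordinates (values
// are examples, not defaults).
const payload = {
  jsonrpc: "2.0",
  method: "tools/call",
  params: {
    name: "evaluate_seismic_exposure",
    arguments: {
      location: "San Francisco, CA", // always required
      latitude: 37.7749,             // optional — skips geocoding
      longitude: -122.4194,
      radiusKm: 200                  // overrides the 100 km default
    }
  },
  id: 1
};

console.log(JSON.stringify(payload.params.arguments));
```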

Usage tips

  • Provide coordinates when you have them — skip geocoding latency by passing latitude and longitude directly alongside the location string for all spatial tools
  • Start with assess_location_risk before calling generate_underwriting_brief — the quick assessment takes one-quarter of the time and identifies whether a full brief is warranted
  • Use analyze_disaster_history for US states and counties — FEMA data is most complete at state and county level; city-level queries return fewer records for rural areas
  • Set radiusKm to 200 in seismically active zones — the default 100km radius may underrepresent fault exposure in areas like California or Turkey where faults extend far from urban centres
  • Use score_crime_proximity and check_flood_risk for UK addresses only — these tools source from UK government data; non-UK addresses will return limited or empty results

Output example

Full output from generate_underwriting_brief for a commercial property submission:

{
  "location": "1400 Brickell Avenue, Miami, FL",
  "coordinates": { "lat": 25.7589, "lon": -80.1946 },
  "compositeRiskScore": 61,
  "riskTier": "SUBSTANDARD",
  "premiumModifier": 1.84,
  "allSignals": [
    "12 FEMA major disaster declarations in area",
    "Significant seismic activity — 6 events recorded",
    "4 active weather alerts, including severe/extreme",
    "2 severe flood warnings active",
    "3 pollutants exceed WHO guidelines",
    "Disaster frequency accelerating — 50%+ increase over prior decade",
    "2 extreme weather events — elevated climate trajectory"
  ],
  "underwritingNotes": [
    "Dominant peril: Natural Disaster (FEMA) — consider exclusions or sublimits",
    "Climate trajectory rapidly worsening — review at shorter intervals",
    "Multiple pollution exceedances — environmental liability endorsement recommended"
  ],
  "compositePeril": {
    "score": 68,
    "disasterCount": 18,
    "earthquakeRisk": 14.5,
    "weatherAlerts": 4,
    "floodRisk": 3,
    "perilLevel": "HIGH",
    "dominantPeril": "Natural Disaster (FEMA)"
  },
  "environmentalContamination": {
    "score": 44,
    "airQualityIndex": 38,
    "pollutantCount": 5,
    "exceedances": 3,
    "contaminationLevel": "ELEVATED",
    "pollutants": [
      { "parameter": "pm25", "value": 34.2, "unit": "µg/m³" },
      { "parameter": "no2", "value": 41.8, "unit": "µg/m³" },
      { "parameter": "o3", "value": 118.0, "unit": "µg/m³" }
    ]
  },
  "crimeExposure": {
    "score": 38,
    "totalCrimes": 94,
    "violentCrimes": 7,
    "propertyCrimes": 22,
    "exposureLevel": "MODERATE",
    "topCategories": [
      { "category": "theft", "count": 31 },
      { "category": "criminal damage", "count": 18 },
      { "category": "burglary", "count": 14 }
    ]
  },
  "climateTrajectory": {
    "score": 72,
    "projectedRisk5yr": 78,
    "projectedRisk10yr": 84,
    "projectedRisk25yr": 96,
    "trendDirection": "RAPIDLY_WORSENING",
    "climateFactors": ["Accelerating disaster frequency", "Extreme weather patterns"]
  },
  "propertyData": [
    { "price": 4750000, "propertyType": "Commercial", "date": "2025-09-14" }
  ]
}

Output fields

| Field | Type | Description |
|---|---|---|
| location | string | Input location as provided |
| coordinates | object \| null | Geocoded {lat, lon} or null if geocoding failed |
| compositeRiskScore | number | Weighted composite 0-100: Peril 35%, Climate 25%, Crime 20%, Env 20% |
| riskTier | string | PREFERRED / STANDARD / SUBSTANDARD / DECLINE |
| premiumModifier | number | Multiplier 0.80–2.50 computed as 0.80 + (score/100) × 1.70 |
| allSignals | string[] | Consolidated narrative risk signals from all four scoring models |
| underwritingNotes | string[] | Actionable notes: referral triggers, exclusion and endorsement recommendations |
| compositePeril.score | number | Peril sub-score 0-100 (FEMA 30pts + Seismic 25pts + Weather 25pts + Flood 20pts) |
| compositePeril.perilLevel | string | MINIMAL / LOW / MODERATE / HIGH / SEVERE |
| compositePeril.dominantPeril | string | Highest-scoring peril category name |
| compositePeril.disasterCount | number | Total FEMA declarations returned |
| compositePeril.earthquakeRisk | number | Raw seismic score before 25-point cap |
| compositePeril.weatherAlerts | number | NOAA alert count |
| compositePeril.floodRisk | number | UK flood warning count |
| environmentalContamination.score | number | Contamination score 0-100 (AQI 50pts + Exceedances 30pts + Diversity 20pts) |
| environmentalContamination.contaminationLevel | string | CLEAN / ACCEPTABLE / ELEVATED / HAZARDOUS |
| environmentalContamination.exceedances | number | Pollutants exceeding WHO 2021 guidelines |
| environmentalContamination.pollutants | array | Up to 10 readings: {parameter, value, unit} |
| crimeExposure.score | number | Crime score 0-100 (Violent 40pts + Property 30pts + Volume 20pts + ASB 10pts) |
| crimeExposure.exposureLevel | string | LOW / MODERATE / HIGH / EXTREME |
| crimeExposure.violentCrimes | number | Violent crime incident count |
| crimeExposure.propertyCrimes | number | Property crime incident count |
| crimeExposure.topCategories | array | Up to 8 crime categories with counts |
| climateTrajectory.score | number | Climate trajectory score 0-100 |
| climateTrajectory.trendDirection | string | IMPROVING / STABLE / WORSENING / RAPIDLY_WORSENING |
| climateTrajectory.projectedRisk5yr | number | Projected score at 5-year horizon |
| climateTrajectory.projectedRisk10yr | number | Projected score at 10-year horizon |
| climateTrajectory.projectedRisk25yr | number | Projected score at 25-year horizon |
| propertyData | array | Up to 10 UK Land Registry transactions for local valuation context |

How much does it cost to run insurance underwriting assessments?

This MCP uses pay-per-event pricing — the scenarios below assume $0.045 per tool call. Platform compute costs are included.

| Scenario | Tool calls | Cost per call | Total cost |
|---|---|---|---|
| Quick test — single location risk | 1 | $0.045 | $0.045 |
| Daily triage — 20 submissions | 20 | $0.045 | $0.90 |
| Weekly batch — 100 risk assessments | 100 | $0.045 | $4.50 |
| Monthly portfolio — 500 assessments | 500 | $0.045 | $22.50 |
| Enterprise — 2,000 renewals/month | 2,000 | $0.045 | $90.00 |

You can set a maximum spending limit per run to control costs. The server stops processing when your budget is reached and returns a structured error for any remaining calls.

Compare this to commercial catastrophe modeling subscriptions (RMS, AIR Worldwide, CoreLogic) which start at $15,000-50,000 per year. For routine triage and pre-screening workflows, most underwriting teams using this MCP spend $20-100 per month with no subscription commitment.

How to connect using the API

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "insurance-underwriting": {
      "url": "https://insurance-underwriting-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}

Python

import httpx
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"
MCP_URL = "https://insurance-underwriting-intelligence-mcp.apify.actor/mcp"

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
        "name": "generate_underwriting_brief",
        "arguments": {
            "location": "1400 Brickell Avenue, Miami, FL"
        }
    },
    "id": 1
}

response = httpx.post(
    MCP_URL,
    json=payload,
    headers={"Authorization": f"Bearer {APIFY_TOKEN}"}
)

result = response.json()
brief = json.loads(result["result"]["content"][0]["text"])
print(f"Risk Tier: {brief['riskTier']}")
print(f"Composite Score: {brief['compositeRiskScore']}/100")
print(f"Premium Modifier: {brief['premiumModifier']}x")
for note in brief.get("underwritingNotes", []):
    print(f"  - {note}")

JavaScript

const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const MCP_URL = "https://insurance-underwriting-intelligence-mcp.apify.actor/mcp";

const response = await fetch(MCP_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${APIFY_TOKEN}`
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "tools/call",
    params: {
      name: "assess_location_risk",
      arguments: {
        location: "Houston, TX",
        latitude: 29.7604,
        longitude: -95.3698
      }
    },
    id: 1
  })
});

const result = await response.json();
const peril = JSON.parse(result.result.content[0].text);
console.log(`Peril Level: ${peril.compositePeril.perilLevel}`);
console.log(`Score: ${peril.compositePeril.score}/100`);
console.log(`Dominant Peril: ${peril.compositePeril.dominantPeril}`);
for (const signal of peril.compositePeril.signals) {
  console.log(`  Signal: ${signal}`);
}

cURL

# Run a full underwriting brief
curl -X POST "https://insurance-underwriting-intelligence-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "generate_underwriting_brief",
      "arguments": {
        "location": "New Orleans, LA",
        "latitude": 29.9511,
        "longitude": -90.0715
      }
    },
    "id": 1
  }'

# List all available tools
curl -X POST "https://insurance-underwriting-intelligence-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"tools/list","params":{},"id":2}'

How Insurance Underwriting Intelligence MCP Server works

Geocoding and spatial query preparation

Every tool that accepts a location string first resolves it to decimal coordinates via the nominatim-geocoder actor (OpenStreetMap Nominatim). If coordinates are provided directly, this step is skipped. Resolved coordinates feed USGS, NOAA, and OpenAQ for spatial queries; FEMA and UK sources receive the raw location string.

Parallel data collection across 8 sources

runActorsParallel dispatches all calls simultaneously using Promise.allSettled. A slow or unreachable source returns an empty array rather than blocking the others. The full brief runs up to 8 actors in parallel — FEMA, USGS, NOAA, UK Flood Warnings, UK Police, OpenAQ, UK Land Registry, and Nominatim — each allocated 256MB and a 120-second timeout.
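
The failure-isolation behaviour described above can be sketched as a pure mapping over settled results. This is an illustrative helper, not the server's actual code; `settledToData` is a hypothetical name.

```javascript
// Map Promise.allSettled results to data arrays, substituting an empty
// array for any source that rejected — a sketch of the pattern where one
// failed source does not block the rest.
function settledToData(settledResults) {
  return settledResults.map((r) => (r.status === "fulfilled" ? r.value : []));
}

// Example: the second source timed out, the others succeeded.
const settled = [
  { status: "fulfilled", value: [{ disasterNumber: 4673 }] },
  { status: "rejected", reason: new Error("USGS timeout") },
  { status: "fulfilled", value: [] },
];
console.log(settledToData(settled).map((d) => d.length)); // [ 1, 0, 0 ]
```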

Four independent scoring models

Composite Peril scoring weights FEMA major disaster history (max 30 points) using a declarationType filter for DR-type declarations, seismic exposure (max 25) using a magnitude-tiered point system (Mag 6+ = 10pts, Mag 5+ = 5pts, Mag 4+ = 2pts, below 4 = 0.5pts), NOAA weather severity (max 25) using an EXTREME/SEVERE/MODERATE/OTHER weighting, and UK flood warning severity (max 20) using a filter for "severe", "danger", and "warning" severity strings.
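
The magnitude-tiered seismic component can be sketched as a small scoring function. The bucket values and 25-point cap come from the text above; the function itself is illustrative, not the server's source.

```javascript
// Magnitude-tiered seismic scoring sketch: Mag 6+ = 10pts, Mag 5+ = 5pts,
// Mag 4+ = 2pts, below 4 = 0.5pts, capped at 25 per assessment.
function seismicScore(magnitudes) {
  const raw = magnitudes.reduce((sum, mag) => {
    if (mag >= 6) return sum + 10;
    if (mag >= 5) return sum + 5;
    if (mag >= 4) return sum + 2;
    return sum + 0.5;
  }, 0);
  return Math.min(raw, 25); // cap prevents event-dense regions from dominating
}

console.log(seismicScore([6.1, 5.2, 4.3, 3.0])); // 17.5
console.log(seismicScore([6.5, 6.2, 6.0]));      // 25 (cap applied)
```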

Environmental Contamination scoring compares each OpenAQ measurement to WHO 2021 guideline thresholds for 6 pollutants. Measurements that exceed a threshold contribute weighted AQI points proportional to their exceedance ratio (value / threshold × 25). Pollutant diversity (unique parameter count) adds up to 20 additional points to capture complex contamination scenarios.
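
The exceedance-ratio weighting for a single reading can be sketched as follows. The value / threshold × 25 formula is quoted from the text; the WHO 2021 24-hour PM2.5 guideline of 15 µg/m³ used in the example is an assumed threshold for illustration.

```javascript
// Exceedance-ratio weighting sketch: a reading contributes points
// proportional to how far it exceeds its guideline (value / threshold × 25);
// readings within the guideline contribute nothing.
function exceedancePoints(value, threshold) {
  const ratio = value / threshold;
  return ratio > 1 ? ratio * 25 : 0;
}

// PM2.5 at 34.2 µg/m³ vs an assumed 15 µg/m³ guideline (~2.3× exceedance).
console.log(exceedancePoints(34.2, 15).toFixed(1)); // "57.0"
console.log(exceedancePoints(10, 15));              // 0 (within guideline)
```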

Crime Exposure scoring classifies UK Police crime categories into 7 violent crime types and 6 property crime types using substring matching. Violent crime contributes 4 points each (max 40), property crime 2 points each (max 30), total volume contributes a log-scaled score (max 20), and anti-social behaviour adds up to 10 points. The log2 volume scaling prevents high-volume low-severity areas from scoring above genuinely dangerous ones.
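
A minimal sketch of those weights, assuming the component caps stated above. The ×3 multiplier on the log2 volume term is an assumption — the text says only that volume is log-scaled with a 20-point cap.

```javascript
// Crime exposure sketch: violent 4pts each (max 40), property 2pts each
// (max 30), log2-scaled volume (max 20, ×3 multiplier assumed), and
// anti-social behaviour up to 10pts.
function crimeScore({ violent, property, total, asb }) {
  const violentPts = Math.min(violent * 4, 40);
  const propertyPts = Math.min(property * 2, 30);
  const volumePts = Math.min(Math.log2(total + 1) * 3, 20); // assumed scaling
  const asbPts = Math.min(asb, 10);
  const score = Math.min(violentPts + propertyPts + volumePts + asbPts, 100);
  return { violentPts, propertyPts, volumePts, asbPts, score };
}

// Counts from the example output: 7 violent, 22 property, 94 total.
const breakdown = crimeScore({ violent: 7, property: 22, total: 94, asb: 4 });
console.log(breakdown.violentPts, breakdown.propertyPts); // 28 30
```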

Climate Trajectory scoring calculates a decade-over-decade acceleration ratio from FEMA declaration timestamps. It compares the average annual disaster count for the most recent 10 years against the prior 10 years. An acceleration ratio above 1.5 triggers the "accelerating" signal. Forward projections at 5, 10, and 25 years use an exponential growth model: currentScore × accelerationRatio^(horizon/20).
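
The projection formula above can be sketched directly. The cap at 100 is an assumption to keep projections on the 0-100 scale; the exponent form is quoted from the text.

```javascript
// Exponential forward projection sketch:
// projected = currentScore × accelerationRatio^(horizon / 20), capped at 100.
function projectRisk(currentScore, accelerationRatio, horizonYears) {
  const projected = currentScore * Math.pow(accelerationRatio, horizonYears / 20);
  return Math.min(Math.round(projected), 100);
}

// Current score 72 with an illustrative 1.6× acceleration ratio.
const horizons = [5, 10, 25].map((h) => projectRisk(72, 1.6, h));
console.log(horizons); // [ 81, 91, 100 ] — the 25-year projection hits the cap
```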

Composite brief and premium modifier

The final composite score weights Composite Peril at 35%, Climate Trajectory at 25%, Crime Exposure at 20%, and Environmental Contamination at 20%. The premium modifier is computed as 0.80 + (compositeRiskScore / 100) × 1.70, producing a range from 0.80 (lowest-risk preferred accounts) to 2.50 (highest-risk). Underwriting notes are generated by conditional threshold checks against each sub-score.
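
Putting the weights, modifier formula, and tier bands together gives a sketch like this (tier boundaries taken from the Features section; the rounding choices are assumptions):

```javascript
// Composite weighting (35/25/20/20), premium modifier
// 0.80 + (score/100) × 1.70, and the four tier bands.
function compositeBrief(peril, climate, crime, env) {
  const score = Math.round(0.35 * peril + 0.25 * climate + 0.20 * crime + 0.20 * env);
  const modifier = Math.round((0.80 + (score / 100) * 1.70) * 100) / 100;
  const tier = score <= 24 ? "PREFERRED"
             : score <= 49 ? "STANDARD"
             : score <= 74 ? "SUBSTANDARD"
             : "DECLINE";
  return { score, modifier, tier };
}

// Sub-scores as in the example output (peril 68, climate 72, crime 38, env 44).
console.log(compositeBrief(68, 72, 38, 44)); // { score: 58, modifier: 1.79, tier: 'SUBSTANDARD' }
```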

Tips for best results

  1. Use assess_location_risk as a triage filter. Run the quick four-source assessment first on every submission. Only escalate to generate_underwriting_brief for MODERATE and above results. This cuts spend on low-risk submissions by 75%.

  2. Supply coordinates for US properties. USGS and NOAA return more precise results with coordinate-based spatial queries than text searches. Geocode your property schedule before bulk assessments to skip auto-geocoding latency.

  3. Run analyze_disaster_history at state level for trend analysis. FEMA records are most complete at state and county granularity. City-level queries return fewer records for rural areas.

  4. Review climate trajectory before long-term policy commitments. A STANDARD current score with a RAPIDLY_WORSENING trajectory may be mispriced on a 10-year policy. project_climate_trajectory surfaces these mismatches before they become loss events.

  5. Set a per-run spending limit for batch portfolio work. The server returns a structured eventChargeLimitReached error when your budget cap is hit — easy to catch and log without aborting your entire batch.

  6. Cross-reference crime scores with insured sector. The Crime Exposure Gradient is most informative for retail, hospitality, and light industrial risks. For office properties, weight environmental and climate scores more heavily.
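
The escalation rule from tip 1 can be expressed as a small filter over quick-assessment results. This is a hypothetical client-side helper, assuming the perilLevel values listed in the output fields table.

```javascript
// Triage filter sketch: escalate to generate_underwriting_brief only when
// assess_location_risk reports MODERATE peril or above.
const ESCALATE_LEVELS = new Set(["MODERATE", "HIGH", "SEVERE"]);

function needsFullBrief(quickResult) {
  return ESCALATE_LEVELS.has(quickResult.compositePeril.perilLevel);
}

console.log(needsFullBrief({ compositePeril: { perilLevel: "LOW" } }));  // false
console.log(needsFullBrief({ compositePeril: { perilLevel: "HIGH" } })); // true
```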

Combine with other Apify actors

| Actor | How to combine |
|---|---|
| Website Contact Scraper | Extract broker or property manager contact details after identifying high-risk locations requiring manual follow-up |
| Company Deep Research | Supplement property risk with business-level intelligence on the insured entity for commercial package underwriting |
| B2B Lead Qualifier | Score commercial prospects by location risk before routing to underwriters — filter out uninsurable geographies upstream |
| Trustpilot Review Analyzer | Assess reputation risk for retail or hospitality risks by analysing public sentiment data alongside peril scores |
| Website Tech Stack Detector | For tech-sector commercial risks, detect infrastructure and cybersecurity posture as a supplementary risk signal |
| WHOIS Domain Lookup | Verify insured entity domain registration and corporate identity as part of a broader due diligence workflow |
| Multi-Review Analyzer | Pull Trustpilot and BBB reviews for the insured business to assess claims history proxies and customer complaint patterns |

Limitations

  • UK crime and flood data only — score_crime_proximity and check_flood_risk draw from UK government sources. For non-UK addresses, crime data will be absent and flood results will be limited to FEMA flood disasters only.
  • FEMA coverage is US-only — analyze_disaster_history and the FEMA component of assess_location_risk only cover federal disaster declarations in the United States. International property risk requires alternative data sources.
  • Not a replacement for cat modeling — this tool provides data signals from public sources. It does not replicate RMS RiskLink, AIR Touchstone, or CoreLogic's probabilistic damage functions. Use as a triage and pre-screening layer, not as the sole basis for pricing.
  • OpenAQ data availability is station-dependent — air quality measurements are only as current and dense as the nearest monitoring station network. Some rural or developing-country locations may return zero readings.
  • Seismic scoring is frequency-based, not probabilistic — the USGS scoring model counts and weights historical events but does not compute probabilistic annual exceedance (PAE) rates. For seismic-sensitive commercial risks, this should supplement, not replace, a site-specific PGA analysis.
  • Climate projections use a simplified acceleration model — the exponential forward projection based on FEMA decade-over-decade acceleration is indicative, not actuarially certified. Treat 25-year projections as directional rather than precise.
  • Geocoding accuracy varies by address quality — Nominatim performs best with structured addresses. Partial addresses, PO boxes, or rural routes may resolve to county centroids rather than precise property coordinates.
  • No building-level data — scores reflect location hazard exposure, not property construction quality, age, occupancy type, or loss history. Building characteristics must be assessed separately.

Integrations

  • Claude Desktop — add this server to claude_desktop_config.json and ask for underwriting risk assessments in natural language from within your Claude session
  • Cursor / Windsurf / Cline — connect as an MCP server in any compatible IDE for underwriting intelligence within your development or data analysis workflow
  • Apify API — call all 8 tools programmatically from Python, JavaScript, or any HTTP client for batch portfolio processing
  • Zapier — trigger underwriting assessments from new rows in Google Sheets or new submissions in your intake form
  • Make — build no-code workflows that run assessments on new property submissions and push results to Airtable, Notion, or your CRM
  • Google Sheets — export risk scores and tier classifications into spreadsheets for underwriting review and portfolio tracking
  • Webhooks — push completed assessment JSON to your underwriting workbench, policy management system, or compliance platform automatically
  • LangChain / LlamaIndex — integrate underwriting intelligence into AI-powered policy review agents and risk analysis pipelines

Troubleshooting

  • All tools return empty results despite a valid location — Check that your Apify API token is included in the Authorization: Bearer header. Without authentication, the standby server will reject requests. Verify the token at Apify Console > Settings > Integrations.

  • Crime and flood scores are always zero for US addresses — Expected behaviour. score_crime_proximity uses UK Police data and check_flood_risk prioritises UK Environment Agency warnings. For US flood data, use the FEMA flood disaster component from check_flood_risk or the full generate_underwriting_brief which includes both sources.

  • generate_underwriting_brief times out on some locations — The full brief dispatches 8 parallel actor calls with a 120-second timeout each. Network latency or a slow upstream data source can occasionally exceed this. Retry once; if the issue persists, use assess_location_risk (4 sources) or individual tools instead.

  • Seismic score is high for a location with no known fault lines — USGS returns all earthquakes within the search radius, which may include distant events above the minimum magnitude (2.5 by default). Reduce radiusKm from the default 100 to 25-50 for dense urban areas to limit results to genuinely proximate seismic activity.

  • Climate trajectory shows RAPIDLY_WORSENING despite a low current score — The trajectory model uses FEMA disaster frequency acceleration regardless of absolute disaster count. A location with a small but growing number of declarations (e.g. 1 per decade rising to 3 per decade) will register a high acceleration ratio. Review the raw disaster count in allSignals to contextualise the trajectory rating.

Responsible use

  • This MCP server accesses publicly available government data from FEMA, USGS, NOAA, UK Environment Agency, UK Police, HM Land Registry, and OpenAQ.
  • All data sources are official government databases made available for public access and research.
  • Risk scores and underwriting recommendations are data signals derived from public records — they are not actuarially certified ratings. Underwriting decisions must be reviewed by qualified professionals.
  • Do not use outputs as the sole basis for coverage denial without human review. Risk tiers are screening aids, not binding determinations.
  • For guidance on web scraping legality, see Apify's guide.

FAQ

How accurate is the Composite Peril Score for insurance underwriting intelligence? The score is a directional risk signal from public government data, not a probabilistic actuarial model. It correlates well with known high-hazard zones but should be used as a triage and pre-screening tool. Final pricing must incorporate building-specific data, loss history, and actuarial judgment.

What regions are covered for insurance underwriting risk assessment? US coverage: FEMA disaster declarations, USGS earthquakes, NOAA weather alerts. UK coverage: Environment Agency flood warnings, UK Police crime data, HM Land Registry. Global data: USGS earthquakes (worldwide) and OpenAQ air quality (80+ countries). Crime data is currently UK-only.

How is this different from RMS or AIR Worldwide catastrophe modeling software? RMS and AIR provide probabilistic cat models with certified annual exceedance probabilities required for treaty reinsurance pricing. This MCP delivers real-time public-source risk signals for triage, submission screening, and pre-survey intelligence. Use it upstream of cat models, not as a replacement.

Does insurance underwriting intelligence replace a physical risk survey? No. This provides hazard and exposure intelligence for underwriting triage. It does not assess building construction quality, occupancy, fire protection, or site-specific engineering characteristics that a physical survey captures. Use it to determine which risks warrant survey investment.

How long does a full underwriting brief take to generate? Typically 20-45 seconds. The server dispatches all 8 actor calls in parallel, so total time is determined by the slowest data source rather than their sum. assess_location_risk (4 sources) typically completes in 10-20 seconds.

Can I assess multiple properties in a portfolio for bulk insurance underwriting? Yes. Call tools via the Apify API in a loop from Python or JavaScript. For large schedules (100+ properties), use assess_location_risk as the first-pass filter and only run generate_underwriting_brief on MODERATE and above results to manage cost and processing time.
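The two-pass workflow above can be sketched as follows. The MCP endpoint URL and tool names come from this page; the JSON-RPC `tools/call` request shape is the standard MCP convention, and the tier labels and response field (`riskTier`) are assumptions — check your actual tool output before relying on them.

```javascript
// Sketch: screen a portfolio with the cheap assessment first, then run the
// full brief only on MODERATE-and-above results, as described above.
const MCP_URL =
  "https://ryanclinton--insurance-underwriting-intelligence-mcp.apify.actor/mcp";

// Assumed tier labels, ordered from least to most severe.
const TIER_ORDER = ["LOW", "MODERATE", "HIGH", "SEVERE"];

function needsFullBrief(riskTier) {
  return TIER_ORDER.indexOf(riskTier) >= TIER_ORDER.indexOf("MODERATE");
}

async function callTool(name, args) {
  // Standard MCP JSON-RPC call; auth header shape is an assumption.
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.APIFY_TOKEN}`,
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: { name, arguments: args },
    }),
  });
  return res.json();
}

async function screenPortfolio(locations) {
  const briefs = [];
  for (const loc of locations) {
    const quick = await callTool("assess_location_risk", loc);
    // Assumes the tool result exposes a riskTier field.
    const tier = quick?.result?.riskTier ?? "LOW";
    if (needsFullBrief(tier)) {
      briefs.push(await callTool("generate_underwriting_brief", loc));
    }
  }
  return briefs;
}
```

With the listed prices, this filter means a 500-property schedule only incurs the $0.30 brief cost for the subset that clears the MODERATE threshold.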

How much does it cost to run 500 insurance underwriting assessments per month? At $0.10 per assess_location_risk call, 500 calls cost $50.00. Running 100 full generate_underwriting_brief calls at $0.30 each costs $30.00. A typical workflow of 500 quick assessments plus 100 full briefs therefore costs approximately $80 per month, compared with annual cat modeling subscriptions that start around $15,000.

Is it legal to use public government data for insurance underwriting purposes? Yes. All data sources (FEMA, USGS, NOAA, UK Police, UK Environment Agency, HM Land Registry, OpenAQ) are official government databases published for public access. Using public data for risk assessment is standard practice in the insurance industry. For further guidance, see Apify's web scraping legality guide.

What happens if one of the 8 data sources is unavailable? The runActorsParallel function uses Promise.allSettled, so a failed source returns an empty array for that data category. Scoring models treat missing data as zero contribution rather than erroring. Retry the assessment if a critical source (FEMA, USGS) was unavailable.
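The graceful-degradation pattern described above can be sketched like this; the source names and fetcher bodies are illustrative, not the server's actual internals.

```javascript
// Dispatch all data sources in parallel; a rejected source maps to an empty
// array, which the scoring models treat as zero contribution rather than an error.
async function runSourcesParallel(fetchers) {
  const names = Object.keys(fetchers);
  const settled = await Promise.allSettled(names.map((n) => fetchers[n]()));
  const out = {};
  settled.forEach((result, i) => {
    out[names[i]] = result.status === "fulfilled" ? result.value : [];
  });
  return out;
}

// Example: one source succeeds, one times out.
runSourcesParallel({
  fema: async () => [{ disasterNumber: 4611 }], // illustrative record
  usgs: async () => {
    throw new Error("USGS timeout");
  },
}).then((demo) => {
  // demo.fema holds the record; demo.usgs is [] (missing data, not an error)
  console.log(demo);
});
```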

How does the climate trajectory projection model work for long-term underwriting? The model compares average annual FEMA disaster declarations for the most recent 10 years against the prior 10 years to calculate an acceleration ratio. It then applies currentScore × accelerationRatio^(horizon/20) to project forward. An acceleration ratio above 1.5 signals climate-driven worsening. Treat 25-year projections as a risk flag, not a precise forecast.
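The projection formula above can be worked through numerically. The formula itself is as stated; the rounding and the cap at the 100-point score ceiling are assumptions for the sketch.

```javascript
// projected = currentScore * accelerationRatio ** (horizonYears / 20)
function projectClimateScore(currentScore, accelerationRatio, horizonYears) {
  const projected = currentScore * accelerationRatio ** (horizonYears / 20);
  // Assumed: round to one decimal and cap at the 100-point scale ceiling.
  return Math.min(100, Math.round(projected * 10) / 10);
}

// A location at score 40 whose declarations accelerate 1.5x decade-over-decade:
projectClimateScore(40, 1.5, 10); // 49   (10-year horizon)
projectClimateScore(40, 1.5, 20); // 60   (20-year horizon)
projectClimateScore(40, 1.5, 25); // 66.4 (25-year horizon: a flag, not a forecast)
```

Note how the exponent `horizon / 20` means the full acceleration ratio is applied at the 20-year mark and compounds beyond it, which is why long-horizon outputs should be read as risk flags rather than point forecasts.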

Can I integrate insurance underwriting risk scores into my policy management system? Yes. Call tools programmatically via the Apify API and parse the JSON output into your PMS. Webhooks push completed assessments to any HTTP endpoint. For CRM integration, combine with HubSpot Lead Pusher to attach risk scores to account records.

Help us improve

If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:

  1. Go to Account Settings > Privacy
  2. Enable Share runs with public Actor creators

This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.

Support

Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom underwriting intelligence solutions or enterprise integrations, reach out through the Apify platform.

How it works

01

Configure

Set your parameters in the Apify Console or pass them via API.

02

Run

Click Start, trigger via API, webhook, or set up a schedule.

03

Get results

Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.

Use cases

Underwriting Teams

Triage submissions with multi-peril risk scores before committing survey resources.

Portfolio Managers

Screen property schedules in bulk and flag locations that warrant a full brief.

Data Teams

Automate data collection pipelines with scheduled runs.

Developers

Integrate via REST API or use as an MCP tool in AI workflows.

Ready to try Insurance Underwriting Intelligence MCP Server?

Start for free on Apify. No credit card required.

Open on Apify Store