
Open Source Software Supply Chain MCP Server

Open source supply chain risk assessment for AppSec teams, engineering leads, and platform engineers who need more than a CVE scanner. This MCP server aggregates 7 live data sources — GitHub, NVD, CISA KEV, StackExchange, Hacker News, Federal Register, and Congress.gov — into a composite Dependency Risk Score (0-100) with a machine-readable verdict from `LOW_RISK` to `DO_NOT_USE`.


Pricing

Pay Per Event model. You only pay for what you use.

| Event | Description | Price |
|---|---|---|
| dependency_risk_assessment | All 7 sources: GitHub + NVD + CISA + StackExchange + HN + Federal Register + Congress. | $0.20 |
| maintainer_bus_factor | Contributor Gini coefficient, activity recency, community support. | $0.06 |
| vulnerability_exposure_timeline | NVD CVE severity + CISA KEV active exploitation + patch timeline. | $0.08 |
| license_compliance_check | Copyleft vs permissive, SBOM regulatory requirements. | $0.06 |
| community_health_score | GitHub stars + StackExchange Q&A + Hacker News visibility. | $0.08 |
| sbom_regulatory_tracker | Federal Register + Congress SBOM legislation tracking. | $0.06 |
| security_incident_monitor | Hacker News breach reports + new CVE/KEV disclosures. | $0.08 |
| compare_package_risks | Multi-axis dependency risk comparison. | $0.10 |

Example: 100 events = $20.00 · 1,000 events = $200.00

Connect to your AI agent

Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.

MCP Endpoint
https://ryanclinton--open-source-software-supply-chain-mcp.apify.actor/mcp
Claude Desktop Config
```json
{
  "mcpServers": {
    "open-source-software-supply-chain-mcp": {
      "url": "https://ryanclinton--open-source-software-supply-chain-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation

Vulnerability databases tell you a CVE exists. This server tells you whether the project will ever fix it — by combining maintainer bus factor, community health, SBOM regulatory exposure, and active exploitation status into one actionable assessment per $0.045 tool call. No subscription, no per-seat pricing, no CVE-only blindspots.

What data can you access?

| Data Point | Source | Example |
|---|---|---|
| 📊 Composite Dependency Risk Score (0-100) | All 7 sources | compositeScore: 71, verdict: "HIGH_RISK" |
| 👤 Maintainer bus factor + Gini coefficient | GitHub | busFactorLevel: "SINGLE_POINT", giniCoefficient: 0.87 |
| 🔴 Critical CVE count and severity distribution | NVD | criticalCVEs: 4, highCVEs: 9 |
| ⚡ CISA Known Exploited Vulnerabilities | CISA KEV | kevCount: 3, ransomwareCampaignUse: "Known" |
| 📅 Patch timeline: CVEs older than 90 days | NVD | unfixedCVEsOver90Days: 5 |
| 🏥 Community health level and trajectory | GitHub + StackExchange + HN | healthLevel: "GROWING", stars: 14200 |
| 📜 License type and copyleft risk | GitHub | complianceLevel: "MOSTLY_COMPLIANT", licenseRisk: 0 |
| 🏛️ SBOM regulatory signals | Federal Register + Congress | regulatorySignals: 6, sbomBills: 3 |
| 💬 Developer Q&A activity | StackExchange | discussions: 31, answeredQuestions: 24 |
| 📰 Security incident mentions | Hacker News | newsmentions: 9, hnPoints: 340 |
| ⚠️ Plain-language risk signals | Composite | "3 CISA KEV entries — actively exploited vulnerabilities" |
| 📋 Prioritized remediation recommendations | Composite | "Critical CVEs present — patch immediately or replace" |

Why use Open Source Software Supply Chain MCP Server?

Most dependency auditing workflows start and end with a CVE scanner. You receive a list of CVE IDs, severity ratings, and no context on whether the project is actively maintained, whether the community is shrinking, or whether a federal mandate now requires SBOM documentation for your entire dependency tree. Assembling that picture manually — across GitHub, NVD, CISA KEV, Stack Overflow, and regulatory databases — takes hours per package.

This MCP server automates the entire intelligence pipeline. One tool call dispatches 7 actors in parallel, scores the results across 4 independent dimensions, and returns a structured verdict with supporting signals and prioritized recommendations. A package with one critical CVE and 500 active contributors is a very different risk from one with the same CVE and a single dormant maintainer.

  • Scheduling — run weekly supply chain audits on critical dependencies to detect new CVEs, KEV additions, or contributor departures before they become incidents
  • API access — trigger assessments from Python, JavaScript, or any HTTP client inside your CI/CD pipeline
  • Proxy rotation — all upstream data fetching uses Apify's built-in proxy infrastructure for reliable access at scale
  • Monitoring — receive Slack or email alerts when scheduled runs flag new HIGH_RISK packages or unexpected score increases
  • Integrations — pipe results to Zapier, Make, Jira, or your ITSM platform to create tickets automatically on verdict changes

Features

  • Composite Dependency Risk Score (0-100) using a weighted formula: vulnerability exposure (35%), bus factor risk (25%), inverse community health (20%), inverse SBOM compliance (20%)
  • Maintainer bus factor analysis with contributor Gini coefficient calculation — detects single-maintainer projects where one departure abandons the codebase
  • Activity recency scoring — penalizes repositories with no commits in 90, 180, or 365+ days with escalating risk contributions up to 20 points
  • CVE severity distribution from NVD — maps CRITICAL (×10 pts), HIGH (×5 pts), and MEDIUM (×2 pts) counts to a 40-point exposure subscale
  • CISA KEV cross-reference — identifies CVEs with confirmed active exploitation in the wild, scoring 8 points each on the vulnerability subscale
  • Ransomware campaign detection — applies an additional 5-point multiplier for CVEs confirmed used in ransomware campaigns via the CISA KEV knownRansomwareCampaignUse field
  • Hard-override rule — any package with both a CISA KEV entry and a SINGLE_POINT bus factor is forced to DO_NOT_USE verdict regardless of the numeric composite score
  • License copyleft detection — classifies GPL, AGPL, LGPL, and SSPL licenses against permissive alternatives (MIT, Apache, BSD, ISC) for enterprise distribution risk
  • SBOM compliance readiness — scans Federal Register and Congress.gov for active SBOM mandates and congressional bills affecting the package's regulatory footprint
  • Community health index — weighs GitHub stars (35 pts), StackExchange answered questions (25 pts), Hacker News point totals (25 pts), and cross-signal ecosystem vibrancy (15 pts)
  • Parallel actor execution — all 7 data source actors run concurrently with 120-second per-source timeouts, minimizing end-to-end latency
  • 8 specialized tools — from full composite assessment to focused single-dimension queries for targeted investigation workflows
  • Spending limit enforcement — every tool call checks the per-run charge budget before executing, preventing runaway costs in automated pipelines

Use cases for OSS supply chain risk intelligence

AppSec team dependency audits

AppSec engineers responsible for vulnerability management need to prioritize which transitive dependencies require immediate action. A CVE ID alone does not tell you whether the project ships patches quarterly or has been abandoned for two years. The dependency_risk_assessment tool returns bus factor, patch history signals, and community health in one call, letting teams triage accurately instead of treating every CVE as equally urgent.

Engineering lead package selection

Before adopting a new library, a senior engineer or tech lead wants to know whether it is actively maintained, whether the license creates distribution obligations, and whether it has a history of unpatched critical CVEs. The compare_package_risks tool profiles a candidate package alongside its alternative, so the decision is based on community data rather than GitHub star counts alone.

SBOM compliance and procurement

Organizations subject to Executive Order 14028, NIST guidelines, or contractual SBOM requirements need to track which regulations apply to their OSS usage. The sbom_regulatory_tracker tool queries Federal Register and Congress.gov in real time, scoring the density of SBOM-related regulatory activity so compliance teams stay ahead of new requirements before they become audit findings.

Security incident response

When a supply chain attack surfaces — Log4Shell, XZ Utils, SolarWinds — incident responders need to know immediately how severe the CVE profile is and whether CISA has confirmed active exploitation. The security_incident_monitor tool cross-references Hacker News breach coverage with NVD CVE data and the CISA KEV catalog simultaneously, returning a structured vulnLevel and all associated signals.

Platform engineering and CI/CD gates

Platform engineers embedding dependency risk checks in build pipelines need a machine-readable verdict they can gate on. dependency_risk_assessment returns a structured JSON verdict (LOW_RISK, ACCEPTABLE, REVIEW_NEEDED, HIGH_RISK, DO_NOT_USE) that maps cleanly to CI pass/fail logic without manual score interpretation.
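The pass/fail mapping can be sketched in a few lines of Python. This is an illustrative helper, not part of the server; the `ci_gate` name and the `allow_review` toggle are assumptions, while the five verdict strings come from the server's documented output.

```python
# Hypothetical CI gate: map the server's verdict field to a pipeline exit code.
FAIL_VERDICTS = {"HIGH_RISK", "DO_NOT_USE"}

def ci_gate(verdict: str, allow_review: bool = True) -> int:
    """Return 0 (pass) or 1 (fail) for a CI step, given a risk verdict."""
    if verdict in FAIL_VERDICTS:
        return 1
    # Teams can choose whether REVIEW_NEEDED blocks the build or only warns.
    if verdict == "REVIEW_NEEDED" and not allow_review:
        return 1
    return 0
```

In a pipeline, the exit code would feed directly into the build step's success/failure status.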

Vendor and procurement due diligence

Procurement teams evaluating software vendors who bundle OSS components need defensible evidence of supply chain health. Running dependency_risk_assessment across a vendor's declared dependencies produces documented risk scores that can be included in due diligence artifacts and contract negotiations.

How to use OSS supply chain risk intelligence

  1. Connect the MCP server — add https://open-source-software-supply-chain-mcp.apify.actor/mcp to your MCP client (Claude Desktop, Cursor, Windsurf, Cline, or any MCP-compatible tool). Include your Apify API token as Authorization: Bearer YOUR_TOKEN.
  2. Choose a tool for your use case — for a full risk picture, call dependency_risk_assessment with a package name like "lodash" or "openssl". For a focused check, call vulnerability_exposure_timeline or maintainer_bus_factor directly.
  3. Read the verdict and signals — the response includes a verdict field (LOW_RISK through DO_NOT_USE), a compositeScore from 0-100, and an allSignals array with plain-language risk explanations.
  4. Act on the recommendations — the recommendations array in each response contains specific action items: patch urgency, alternative package evaluation, license review requirements, or migration planning.

MCP tools

| Tool | Price | Data Sources | Description |
|---|---|---|---|
| dependency_risk_assessment | $0.045 | All 7 | Full composite risk score (0-100) with bus factor, CVE exposure, community health, and SBOM compliance. Returns verdict + recommendations. |
| maintainer_bus_factor | $0.045 | GitHub + StackExchange | Contributor Gini coefficient, contributor count, activity recency score. Identifies single-maintainer abandonment risk. |
| vulnerability_exposure_timeline | $0.045 | NVD + CISA KEV | CVE severity distribution, active exploitation status, CVEs older than 90 days without patches. |
| license_compliance_check | $0.045 | GitHub + Federal Register | Copyleft vs permissive classification, SBOM regulatory requirements, license conflict detection. |
| community_health_score | $0.045 | GitHub + StackExchange + HN | Stars, Q&A volume, Hacker News visibility. Returns healthLevel: DEAD through THRIVING. |
| sbom_regulatory_tracker | $0.045 | Federal Register + Congress | Active SBOM mandates, federal regulations, congressional bills on software supply chain. |
| security_incident_monitor | $0.045 | HN + NVD + CISA KEV | Breach reports, new CVE disclosures, CISA KEV additions for a specific package. |
| compare_package_risks | $0.045 | GitHub + NVD + KEV + SE + HN | Side-by-side composite risk profile for package selection decisions. |

Tool input parameters

| Parameter | Tool | Type | Required | Default | Description |
|---|---|---|---|---|---|
| package | All except sbom_regulatory_tracker | string | Yes | — | Package or library name (e.g., "lodash", "openssl", "log4j") |
| ecosystem | dependency_risk_assessment | string | No | — | Ecosystem hint: npm, pypi, cargo, maven, go |
| alternative | compare_package_risks | string | No | — | Alternative package name for side-by-side comparison |
| topic | sbom_regulatory_tracker | string | No | "software supply chain SBOM" | Regulatory topic to track in Federal Register and Congress |

Input examples

Full dependency risk assessment for a critical library:

```json
{
  "tool": "dependency_risk_assessment",
  "package": "log4j",
  "ecosystem": "maven"
}
```

Compare two alternative HTTP client libraries:

```json
{
  "tool": "compare_package_risks",
  "package": "axios",
  "alternative": "got"
}
```

Track current SBOM regulation landscape:

```json
{
  "tool": "sbom_regulatory_tracker",
  "topic": "software bill of materials executive order"
}
```

Input tips

  • Use full ecosystem-qualified names for CVE matching: "apache struts" returns more targeted CVEs than "struts". For Maven artifacts, include the group ID where known.
  • Start with dependency_risk_assessment — the full composite covers the most ground per dollar. Use focused tools for follow-up investigation on specific flagged subscores.
  • Pair vulnerability_exposure_timeline with security_incident_monitor — for known high-risk packages, run both to get CVE history alongside community breach reporting.
  • Use sbom_regulatory_tracker quarterly — the regulatory landscape for OSS supply chain changes frequently; a quarterly run captures new Federal Register actions and congressional bill progress.
  • Set a spending limit per run — in automated pipelines, configure the Apify run's maximum cost to prevent runaway charges if a batch query list is larger than expected.

Output example

```json
{
  "package": "log4j",
  "compositeScore": 71,
  "verdict": "HIGH_RISK",
  "busFactor": {
    "score": 22,
    "contributors": 38,
    "giniCoefficient": 0.61,
    "busFactorLevel": "ADEQUATE",
    "signals": [
      "Gini coefficient 0.61 — moderate commit concentration among top contributors"
    ]
  },
  "vulnExposure": {
    "score": 88,
    "criticalCVEs": 4,
    "highCVEs": 9,
    "kevCount": 3,
    "vulnLevel": "CRITICAL",
    "signals": [
      "4 CRITICAL CVEs — immediate remediation required",
      "9 HIGH severity CVEs — significant exposure",
      "3 CISA KEV entries — actively exploited vulnerabilities",
      "2 used in ransomware campaigns — urgent patching needed",
      "5 CVEs older than 90 days without patch — slow remediation"
    ]
  },
  "communityHealth": {
    "score": 72,
    "stars": 3200,
    "discussions": 31,
    "newsmentions": 9,
    "healthLevel": "GROWING",
    "signals": [
      "31 StackExchange discussions — active Q&A community",
      "9 Hacker News mentions — tech community visibility"
    ]
  },
  "sbomCompliance": {
    "score": 44,
    "regulatorySignals": 6,
    "licenseRisk": 0,
    "complianceLevel": "PARTIAL",
    "signals": [
      "4 SBOM-related federal regulations — compliance landscape active",
      "2 relevant bills in Congress — legislative momentum"
    ]
  },
  "allSignals": [
    "Gini coefficient 0.61 — moderate commit concentration among top contributors",
    "4 CRITICAL CVEs — immediate remediation required",
    "9 HIGH severity CVEs — significant exposure",
    "3 CISA KEV entries — actively exploited vulnerabilities",
    "2 used in ransomware campaigns — urgent patching needed",
    "5 CVEs older than 90 days without patch — slow remediation",
    "31 StackExchange discussions — active Q&A community",
    "4 SBOM-related federal regulations — compliance landscape active"
  ],
  "recommendations": [
    "Critical CVEs present — patch immediately or replace dependency",
    "CISA KEV vulnerabilities — actively exploited, urgent action required",
    "High dependency risk — evaluate alternative packages"
  ]
}
```

Output fields

| Field | Type | Description |
|---|---|---|
| package | string | Package name as provided in the input |
| compositeScore | number | Composite risk score 0-100. Higher = more risk. |
| verdict | string | LOW_RISK / ACCEPTABLE / REVIEW_NEEDED / HIGH_RISK / DO_NOT_USE |
| busFactor.score | number | Bus factor risk subscale 0-100 |
| busFactor.contributors | number | Maximum contributor count detected across matching repos |
| busFactor.giniCoefficient | number | Gini coefficient of commit distribution (0 = equal, 1 = single contributor) |
| busFactor.busFactorLevel | string | HEALTHY / ADEQUATE / CONCENTRATED / FRAGILE / SINGLE_POINT |
| busFactor.signals | string[] | Plain-language bus factor risk signals |
| vulnExposure.score | number | Vulnerability exposure subscale 0-100 |
| vulnExposure.criticalCVEs | number | Count of CRITICAL severity CVEs from NVD |
| vulnExposure.highCVEs | number | Count of HIGH severity CVEs from NVD |
| vulnExposure.kevCount | number | Count of CISA KEV entries for this package |
| vulnExposure.vulnLevel | string | CLEAN / LOW / MODERATE / HIGH / CRITICAL |
| vulnExposure.signals | string[] | Plain-language vulnerability signals |
| communityHealth.score | number | Community health subscale 0-100. Higher = healthier. |
| communityHealth.stars | number | Total GitHub stars across matching repositories |
| communityHealth.discussions | number | StackExchange question count for this package |
| communityHealth.newsmentions | number | Hacker News story count mentioning this package |
| communityHealth.healthLevel | string | DEAD / DECLINING / STABLE / GROWING / THRIVING |
| communityHealth.signals | string[] | Plain-language community health signals |
| sbomCompliance.score | number | SBOM compliance readiness 0-100. Higher = more compliant. |
| sbomCompliance.regulatorySignals | number | Count of relevant federal regulations + congressional bills |
| sbomCompliance.licenseRisk | number | License risk accumulator (higher = more copyleft exposure) |
| sbomCompliance.complianceLevel | string | NON_COMPLIANT / GAPS / PARTIAL / MOSTLY_COMPLIANT / FULLY_COMPLIANT |
| sbomCompliance.signals | string[] | Plain-language compliance signals |
| allSignals | string[] | All signals from all four subscores combined |
| recommendations | string[] | Prioritized remediation recommendations |

Composite risk score methodology

| Score Range | Verdict | Interpretation |
|---|---|---|
| 0-14 | LOW_RISK | Well-maintained, clean vulnerability history, strong community |
| 15-34 | ACCEPTABLE | Minor concerns; monitor but safe to use |
| 35-54 | REVIEW_NEEDED | Moderate bus factor or CVE exposure; team review recommended |
| 55-74 | HIGH_RISK | Critical CVEs, single maintainer, or declining community; plan remediation |
| 75-100 | DO_NOT_USE | Actively exploited vulnerabilities or abandoned project |

The composite formula weights vulnerability exposure most heavily (35%) because active exploitation represents the most immediate risk. Bus factor (25%) is next because an abandoned project cannot patch future vulnerabilities. Community health and SBOM compliance each contribute 20% as leading indicators of long-term viability.

Hard override: any package with both a CISA KEV entry and a SINGLE_POINT bus factor is assigned DO_NOT_USE regardless of the numeric composite score.

How much does it cost to assess OSS supply chain risk?

This MCP server uses pay-per-event pricing — every tool call costs $0.045. Platform compute costs are included. You are not charged for failed calls that hit the spending limit.

| Scenario | Tool calls | Cost per call | Total cost |
|---|---|---|---|
| Quick test — single package | 1 | $0.045 | $0.045 |
| New project setup — 10 key dependencies | 10 | $0.045 | $0.45 |
| Sprint audit — 50 dependencies | 50 | $0.045 | $2.25 |
| Quarterly audit — 200 packages | 200 | $0.045 | $9.00 |
| Enterprise license audit — 1,000 packages | 1,000 | $0.045 | $45.00 |

You can set a maximum spending limit per run to control costs. The server enforces this limit per call and returns a structured error when the budget is reached.

Snyk's developer plan starts at $98/month for CVE and license data — with no maintainer health, community viability, or SBOM regulatory context. Most engineering teams spend $2-10/month with this server for targeted assessments, with no subscription commitment.

How to connect this OSS supply chain MCP server

Claude Desktop

Add to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "open-source-software-supply-chain": {
      "url": "https://open-source-software-supply-chain-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```

Cursor / Windsurf / Cline

Add the MCP server URL in your IDE's MCP configuration panel:

https://open-source-software-supply-chain-mcp.apify.actor/mcp

Include Authorization: Bearer YOUR_APIFY_TOKEN as a request header.

Python

```python
import httpx
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"
MCP_URL = "https://open-source-software-supply-chain-mcp.apify.actor/mcp"

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
        "name": "dependency_risk_assessment",
        "arguments": {
            "package": "requests",
            "ecosystem": "pypi"
        }
    },
    "id": 1
}

response = httpx.post(
    MCP_URL,
    json=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {APIFY_TOKEN}"
    },
    timeout=180
)

result = response.json()
content = json.loads(result["result"]["content"][0]["text"])
print(f"Package: {content['package']}")
print(f"Risk Score: {content['compositeScore']}/100 — {content['verdict']}")
print(f"Critical CVEs: {content['vulnExposure']['criticalCVEs']}")
print(f"KEV count: {content['vulnExposure']['kevCount']}")
print(f"Bus factor level: {content['busFactor']['busFactorLevel']}")
for rec in content.get("recommendations", []):
    print(f"  -> {rec}")
```

JavaScript

```javascript
const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const MCP_URL = "https://open-source-software-supply-chain-mcp.apify.actor/mcp";

const response = await fetch(MCP_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${APIFY_TOKEN}`
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "tools/call",
    params: {
      name: "vulnerability_exposure_timeline",
      arguments: { package: "lodash" }
    },
    id: 1
  })
});

const data = await response.json();
const result = JSON.parse(data.result.content[0].text);
console.log(`Package: ${result.package}`);
console.log(`Vuln level: ${result.vulnExposure.vulnLevel}`);
console.log(`Critical CVEs: ${result.vulnExposure.criticalCVEs}`);
console.log(`CISA KEV entries: ${result.vulnExposure.kevCount}`);
for (const signal of result.vulnExposure.signals) {
  console.log(`  Signal: ${signal}`);
}
```

cURL

```bash
# Full dependency risk assessment
curl -X POST "https://open-source-software-supply-chain-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "dependency_risk_assessment",
      "arguments": {
        "package": "openssl",
        "ecosystem": "c"
      }
    },
    "id": 1
  }'

# Monitor security incidents for a specific package
curl -X POST "https://open-source-software-supply-chain-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "security_incident_monitor",
      "arguments": { "package": "xz-utils" }
    },
    "id": 2
  }'
```

How Open Source Software Supply Chain MCP Server works

Phase 1: Parallel data collection across 7 sources

When a tool call arrives, the server dispatches up to 7 Apify actors simultaneously using Promise.all with a 512 MB memory allocation and 120-second timeout per source. The 7 actors are:

  • github-repo-search — public repository metadata: star counts, fork counts, contributor numbers, last-pushed timestamps, open issue counts, license SPDX identifiers, and CI/Actions presence flags
  • nvd-cve-vulnerability-search — National Vulnerability Database CVE entries with CVSS v3 severity (CRITICAL, HIGH, MEDIUM, LOW), published dates, and affected product version ranges
  • cisa-kev-search — CISA Known Exploited Vulnerabilities catalog entries with the knownRansomwareCampaignUse field indicating confirmed ransomware association
  • stackexchange-search — Stack Overflow and related Q&A: question count, is_answered flags, and answer counts as community support proxies
  • hacker-news-search — HN story counts and aggregate point totals as a proxy for tech-community security incident awareness
  • federal-register-search — Federal Register entries matching sbom, software bill, or supply chain keywords in title and abstract fields
  • congress-bill-search — Congressional bills matching software, cyber, or supply chain in bill titles

Tools that require fewer sources dispatch only the relevant subset. maintainer_bus_factor uses 2 actors; vulnerability_exposure_timeline uses 2; sbom_regulatory_tracker uses 2. This reduces both latency and cost for focused queries.
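The server itself runs TypeScript with Promise.all; as an illustrative Python analogue of the same pattern (per-source timeout with graceful degradation rather than whole-run failure), where `fetch_source` is a stand-in for a real actor call:

```python
import asyncio

async def fetch_source(name: str) -> dict:
    # Placeholder for an Apify actor invocation; real calls go over HTTP.
    await asyncio.sleep(0)
    return {"source": name, "ok": True}

async def collect(sources: list[str], timeout: float = 120) -> list[dict]:
    """Dispatch all sources concurrently; a slow source degrades, not fails."""
    async def guarded(name: str) -> dict:
        try:
            return await asyncio.wait_for(fetch_source(name), timeout)
        except asyncio.TimeoutError:
            return {"source": name, "ok": False}
    return await asyncio.gather(*(guarded(s) for s in sources))

results = asyncio.run(collect(["github", "nvd", "cisa-kev"]))
```

The key design point is the per-task timeout wrapper: end-to-end latency is bounded by the slowest surviving source, and one unresponsive upstream cannot sink the whole assessment.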

Phase 2: Four-dimensional scoring

The scoring.ts module applies four independent models to the aggregated data:

Bus Factor model (0-100 risk scale): Contributor count maps to a base risk score: single maintainer = 40 pts, 2-3 contributors = 25 pts, 4-5 = 15 pts, 6-10 = 8 pts. A Gini coefficient is then calculated from the contributor count distribution across matching repositories using the formula sumDiff / (n * totalSum) where contributions are sorted ascending and index-weighted. The Gini score contributes up to 20 additional points. Activity recency adds 0-20 points based on days since last push: 0 pts if active within 90 days, 8 pts at 90-180 days, 15 pts at 180-365 days, 20 pts beyond one year stale.
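The index-weighted Gini formulation described above can be sketched in Python. This is a standard Gini computation consistent with the sumDiff / (n × totalSum) description, not the server's actual code; the `gini` helper name is illustrative.

```python
def gini(contributions: list[int]) -> float:
    """Gini coefficient of commit counts: 0 = perfectly equal, toward 1 = one contributor."""
    xs = sorted(contributions)          # ascending, as the model describes
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Index-weighted difference sum: sum over i of (2i - n - 1) * x_i, with i 1-indexed.
    sum_diff = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return sum_diff / (n * total)
```

For equal contributions the result is 0.0; for two contributors where one made every commit it is 0.5, the maximum for n = 2.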

Vulnerability Exposure model (0-100 risk scale): CVE severity maps to points — CRITICAL × 10, HIGH × 5, MEDIUM × 2 — capped at 40. CISA KEV entries score 8 points each plus 5 per confirmed ransomware campaign, capped at 35. CVEs published more than 90 days ago score 3 points each, capped at 15. A compound bonus of 10 points applies when both critical CVEs and KEV entries are present simultaneously.
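The point mapping and caps above can be condensed into a small sketch (the function name and argument order are illustrative, not the server's API):

```python
def vuln_exposure_score(critical: int, high: int, medium: int,
                        kev: int, ransomware: int, stale_cves: int) -> int:
    """Vulnerability exposure subscale per the documented point values and caps."""
    severity = min(critical * 10 + high * 5 + medium * 2, 40)   # CVSS severity, capped
    kev_pts  = min(kev * 8 + ransomware * 5, 35)                # KEV + ransomware, capped
    stale    = min(stale_cves * 3, 15)                          # unpatched > 90 days
    compound = 10 if critical > 0 and kev > 0 else 0            # both present at once
    return min(severity + kev_pts + stale + compound, 100)
```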

Community Health model (0-100 positive scale): GitHub stars map to a 35-point scale (10,000+ = 35, 1,000+ = 25, 100+ = 15, 10+ = 8). StackExchange question count and answered-question ratios contribute up to 25 points. Hacker News story count and aggregate point totals contribute up to 25 points. Cross-signal ecosystem vibrancy (both GitHub and StackExchange active simultaneously) adds up to 15 points.

SBOM Compliance model (0-100 positive scale): Copyleft licenses (GPL, AGPL, LGPL, SSPL) add 3 to a risk accumulator; missing licenses add 5. The license score inverts this accumulator up to 30 points. Federal Register SBOM-keyword matches contribute up to 25 points; congressional bill matches contribute up to 25 points; CI/permissive-license readiness indicators (GitHub Actions presence + MIT/Apache/BSD license) add up to 20 points.

Phase 3: Composite score and verdict assignment

The composite formula weights the four subscores:

```text
compositeScore = busFactor.score × 0.25
               + vulnExposure.score × 0.35
               + (100 − communityHealth.score) × 0.20
               + (100 − sbomCompliance.score) × 0.20
```

Community health and SBOM compliance are inverted so that healthier, more compliant packages reduce overall risk. The composite score maps to a verdict tier at 15, 35, 55, and 75 thresholds. The hard override then applies: if kevCount > 0 AND busFactorLevel === 'SINGLE_POINT', the verdict is forced to DO_NOT_USE regardless of the numeric score.
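Putting the formula, the tier thresholds, and the hard override together, a sketch (helper names illustrative, logic taken from the documented weights and cutoffs):

```python
def composite_score(bus: float, vuln: float, community: float, sbom: float) -> float:
    """Weighted composite; community health and SBOM compliance are inverted."""
    return bus * 0.25 + vuln * 0.35 + (100 - community) * 0.20 + (100 - sbom) * 0.20

def assign_verdict(score: float, kev_count: int = 0, bus_factor_level: str = "") -> str:
    # Hard override: actively exploited vulnerability plus a single maintainer.
    if kev_count > 0 and bus_factor_level == "SINGLE_POINT":
        return "DO_NOT_USE"
    if score < 15:
        return "LOW_RISK"
    if score < 35:
        return "ACCEPTABLE"
    if score < 55:
        return "REVIEW_NEEDED"
    if score < 75:
        return "HIGH_RISK"
    return "DO_NOT_USE"
```

For example, subscores of 40 (bus factor), 80 (vulnerability), 50 (community), and 50 (SBOM) yield 10 + 28 + 10 + 10 = 58, a HIGH_RISK verdict; the same package with a KEV entry and a SINGLE_POINT bus factor would be forced to DO_NOT_USE whatever its numeric score.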

Phase 4: Signal aggregation and recommendation generation

Signals from all four models are concatenated into a flat allSignals array for easy consumption. Recommendations are generated from conditional checks: critical CVEs trigger an immediate-patch recommendation, CISA KEV entries trigger an urgent-action recommendation, dead community health triggers a migration recommendation, high bus factor triggers a fork-or-replace recommendation, and copyleft license risk triggers a legal review recommendation.

Tips for best results

  1. Use full ecosystem-qualified names for CVE matching. Searching "struts" returns fewer relevant CVEs than "apache struts". For Maven artifacts, include the group ID where known.
  2. Run dependency_risk_assessment first, then drill with focused tools. The full assessment returns all four subscores. If vulnExposure.score dominates, follow up with vulnerability_exposure_timeline for the complete CVE list.
  3. Cross-reference compare_package_risks before adopting new dependencies. Calling it with your current package and its leading alternative surfaces relative risk profiles in one round trip.
  4. Schedule security_incident_monitor for production critical-path dependencies. Apify scheduling supports daily runs. Pair with a webhook to Slack or PagerDuty when vulnLevel escalates to CRITICAL.
  5. Interpret community health alongside bus factor, not separately. A THRIVING community with a SINGLE_POINT bus factor is still fragile — the community may be active users, not contributors. Check both subscores before concluding a package is safe.
  6. Use sbom_regulatory_tracker before enterprise procurement cycles. The regulatory landscape is changing rapidly; a quarterly run before procurement review boards surfaces new federal mandates before they become compliance gaps.
  7. Combine with static analysis tools for defense in depth. This server covers behavioral and community signals that static scanners miss. For code-level dependency analysis, pair with Syft or OWASP Dependency-Check and bring the composite risk score into your existing workflow via the API.

Combine with other Apify actors

| Actor | How to combine |
|---|---|
| OSS Dependency Risk Report | Run this actor for a formatted HTML/PDF supply chain risk report for stakeholders; use this MCP for interactive AI-driven investigation of flagged packages |
| Cyber Attack Surface Report | Combine supply chain vulnerability data from this MCP with infrastructure exposure scanning for a complete application security picture |
| Company Deep Research | Enrich vendor due diligence with supply chain health scores for the vendor's declared OSS dependencies |
| Website Tech Stack Detector | Detect which OSS libraries a target website uses (100+ technology fingerprints), then feed each into dependency_risk_assessment for a full supply chain audit |
| Federal Contract Intelligence | Identify federal contractors subject to SBOM mandates, then use sbom_regulatory_tracker to assess their compliance landscape |
| SEC EDGAR Filing Analyzer | Cross-reference cybersecurity risk disclosures in 10-K filings with supply chain vulnerability scores for publicly traded software vendors |
| B2B Lead Qualifier | Score security vendor leads against their own OSS supply chain health to assess credibility before outreach |

Limitations

  • GitHub matching is keyword-based, not registry-resolved. The server searches GitHub by package name. It does not resolve npm, PyPI, or Cargo registry metadata to canonical repository URLs. Packages with generic names may return less relevant results.
  • No transitive dependency tree traversal. The server analyzes individual packages named in the input. It does not read package.json, requirements.txt, or lock files. For full SBOM generation, use Syft or CycloneDX CLI and then analyze key dependencies here.
  • CVE matching depends on naming precision. NVD searches are text-based. A query for "react" matches any CVE mentioning React. Specificity matters: "react dom" or "facebook react" returns more targeted results.
  • GitHub contributor data reflects repository search results, not registry contribution graphs. Contributor count and Gini coefficient are derived from repository metadata fields, not full commit history.
  • Community health scores reflect current state, not trends. A project can have a historically high star count while being effectively abandoned. The activity recency signal partially addresses this, but trend analysis over time requires multiple runs.
  • SBOM compliance scoring reflects regulatory landscape density, not your specific compliance obligations. A high regulatorySignals count means the regulatory environment is active, not that your use case is necessarily out of compliance. Consult legal counsel for compliance determinations.
  • Hacker News incident data is community-curated, not exhaustive. Not all security incidents receive HN coverage; a low newsmentions count does not rule out an incident.
  • Private packages and internal forks are not visible. Data is limited to public GitHub repositories. Internally forked or proprietary distributions are outside the scope of this tool.

Integrations

  • Zapier — trigger a security_incident_monitor call when a new CVE alert arrives, and create a Jira ticket if vulnLevel is CRITICAL or HIGH
  • Make — build automated dependency audit workflows that run dependency_risk_assessment on a package list and route HIGH_RISK results to a review queue
  • Google Sheets — export composite risk scores and all signals to a dependency risk register spreadsheet for stakeholder reporting
  • Apify API — embed supply chain checks directly in CI/CD pipelines using the HTTP API with structured JSON responses
  • Webhooks — send POST notifications to Slack or PagerDuty when scheduled security monitoring runs return verdict changes
  • LangChain / LlamaIndex — use this MCP as a tool inside AI security analyst agents that investigate dependency trees and produce risk narratives
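To sketch the CI/CD gating pattern above: assuming your pipeline has already fetched a dependency_risk_assessment result as JSON, a merge gate can key off the verdict field. The intermediate tier name `MODERATE_RISK` below is an assumption for illustration; only `LOW_RISK`, `HIGH_RISK`, and `DO_NOT_USE` appear on this page.

```python
# Hypothetical CI gate: block the merge when a dependency's risk verdict
# reaches a configurable threshold. The verdict ladder mirrors the
# LOW_RISK ... DO_NOT_USE scale described on this page; the MODERATE_RISK
# tier is an illustrative assumption.
VERDICT_ORDER = ["LOW_RISK", "MODERATE_RISK", "HIGH_RISK", "DO_NOT_USE"]

def should_block_merge(assessment: dict, threshold: str = "HIGH_RISK") -> bool:
    """Return True when the assessment's verdict is at or above the threshold."""
    verdict = assessment.get("verdict", "LOW_RISK")
    if verdict not in VERDICT_ORDER:
        return True  # fail closed on unrecognized verdicts
    return VERDICT_ORDER.index(verdict) >= VERDICT_ORDER.index(threshold)
```

In a pipeline, a `True` return would translate to a nonzero exit code so the build fails before a risky dependency ships.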

Troubleshooting

  • Low CVE counts for a package you expect to have many vulnerabilities — CVE matching is text-based. Try a more specific query: instead of "spring", use "spring framework" or "pivotal spring". Also try the package's canonical vendor name as it appears in NVD product names.
  • Bus factor returning HEALTHY for a known single-maintainer project — GitHub repo search by package name may return popular forks or unrelated projects that inflate the contributor count. Check the signals array — concentration risk signals are appended regardless of the aggregate score.
  • dependency_risk_assessment returning empty signals — If all 7 upstream actors return empty datasets, the composite score defaults to zero with no signals. Check that the package name is spelled correctly and verify the Apify platform status at status.apify.com.
  • Spending limit reached error on first call — The per-run spending limit may be set too low. Per-event prices range from $0.06 to $0.20; ensure the run budget is at least $0.20 for a single dependency_risk_assessment call, or $20.00 for a batch of 100 full assessments.
  • Timeout errors on dependency_risk_assessment — This tool dispatches 7 actors with 120-second individual timeouts. Total latency can reach 60-90 seconds under load. Increase client-side timeout to at least 180 seconds.
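The timeout guidance above can be applied in any client. As a sketch, a hypothetical wrapper that gives the request the recommended 180-second deadline and one retry on timeout might look like this; `call` stands in for whatever function actually performs the HTTP or MCP request, and is assumed to honor the timeout it is given.

```python
import time

# Hypothetical client-side wrapper: give a long-running tool call a
# generous deadline (>= 180 s, per the guidance above) and retry once
# on timeout with a brief backoff. `call` is any function that performs
# the actual request and raises TimeoutError when the deadline passes.
def call_with_deadline(call, timeout_s: float = 180.0, retries: int = 1):
    last_err = None
    for attempt in range(retries + 1):
        try:
            return call(timeout_s)
        except TimeoutError as err:
            last_err = err
            time.sleep(min(2 ** attempt, 10))  # back off before retrying
    raise last_err
```

The same shape works for any MCP tool on this server; only dependency_risk_assessment routinely needs the full deadline.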

Responsible use

  • This server only accesses publicly available data from GitHub, NVD, CISA, StackExchange, Hacker News, the Federal Register, and Congress.gov.
  • NVD and CISA KEV are official US government databases. GitHub public repository metadata is openly accessible under GitHub's Terms of Service.
  • Risk scores are analytical indicators, not legal compliance determinations. Do not use scores as the sole basis for procurement or compliance decisions without consulting qualified security and legal professionals.
  • Do not use this server to build automated rejection lists that disadvantage open source maintainers based on bus factor metrics without human review.
  • For guidance on web scraping legality, see Apify's guide.

FAQ

How does open source supply chain risk assessment differ from a standard CVE scanner? A standard CVE scanner reports which CVEs exist. This server adds maintainer bus factor (who will fix them), community health (is the project still viable), license compliance (can you legally distribute it), and SBOM regulatory context (are you required to document it). A package with one critical CVE and an active 500-contributor community is a very different risk from one with the same CVE and a single dormant maintainer.

How accurate is the Gini coefficient for OSS bus factor analysis? The Gini coefficient is calculated from contributor counts returned by GitHub repo search across matching repositories. It approximates commit concentration but does not access full git log history. For projects where a few maintainers hold merge rights over many nominal contributors, the score may understate concentration risk. Use busFactorLevel as a directional indicator, not a precise measurement.
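For readers who want to see the underlying math, here is a textbook Gini computation over hypothetical per-contributor contribution counts. This is an illustration of the standard formula, not the server's internal implementation, which works from repository metadata fields rather than raw counts.

```python
def gini(counts):
    """Gini coefficient of contribution concentration.
    0.0 = perfectly even; values near 1.0 = one contributor dominates."""
    n = len(counts)
    total = sum(counts)
    if n == 0 or total == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs, normalized by 2 * n * sum
    abs_diffs = sum(abs(a - b) for a in counts for b in counts)
    return abs_diffs / (2 * n * total)
```

Four equal contributors yield 0.0, while one contributor doing all the work in a four-person project yields 0.75, which is why a high coefficient maps to elevated bus-factor risk.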

How current is the vulnerability data in OSS supply chain risk assessments? NVD CVE and CISA KEV data are fetched live at query time. New CVE disclosures and KEV catalog additions published today will appear in results immediately. There is no caching layer; each tool call queries upstream sources fresh.

Can I use this MCP server to scan an entire dependency tree automatically? The server analyzes individual packages passed by name. To cover a full tree, generate a dependency list from your package manager (npm list --depth=0, pip freeze, cargo tree) and call dependency_risk_assessment for each direct dependency. Automate the batch via the HTTP API.
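As a sketch of the batch pattern above for an npm project, you could extract direct dependency names straight from package.json rather than parsing CLI output, then issue one assessment call per name. The HTTP call itself is left as a comment since endpoint details vary by client.

```python
import json

def direct_dependencies(package_json_text: str) -> list[str]:
    """Extract direct dependency names from a package.json manifest."""
    manifest = json.loads(package_json_text)
    names = set(manifest.get("dependencies", {})) | set(manifest.get("devDependencies", {}))
    return sorted(names)

# Each returned name would then be passed to dependency_risk_assessment,
# e.g. one HTTP call per package via the Apify API.
```

The equivalent for Python projects would read requirement names from requirements.txt, and for Rust from Cargo.toml; the per-package assessment loop is the same.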

What does DO_NOT_USE mean for an OSS dependency verdict? Either the composite score reached 75+, or the hard override triggered (CISA KEV entry + SINGLE_POINT bus factor simultaneously). In practice, this means the package has confirmed actively exploited vulnerabilities and a single maintainer who may not fix them. The recommendations array specifies whether to patch, replace, or migrate.
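The two DO_NOT_USE conditions described above compose into a simple predicate. This sketch reflects only what is stated on this page, with hypothetical parameter names standing in for the actual response fields:

```python
def is_do_not_use(composite_score: float, in_cisa_kev: bool, bus_factor_level: str) -> bool:
    """True when either DO_NOT_USE condition holds:
    the hard override (KEV entry plus single-maintainer bus factor),
    or a composite score of 75 or above."""
    hard_override = in_cisa_kev and bus_factor_level == "SINGLE_POINT"
    return hard_override or composite_score >= 75
```

Note that the hard override can fire even at a low composite score: active exploitation plus a single maintainer is treated as disqualifying on its own.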

How is this different from Snyk or Dependabot for supply chain security? Snyk and Dependabot focus on CVE detection and automated PR generation. They do not score maintainer health, community viability, or SBOM regulatory exposure. This server provides the contextual intelligence those tools omit: a package with no open CVEs but a dead community and a single dormant maintainer represents a future risk that CVE databases will not surface.

Can I schedule open source supply chain monitoring to run daily? Yes. Use Apify scheduling to call security_incident_monitor or dependency_risk_assessment on a cron schedule for your critical-path dependencies. Configure a webhook to notify Slack, PagerDuty, or email when vulnLevel escalates or a new CISA KEV entry appears.

Is it legal to use public CVE and GitHub data for commercial dependency auditing? Yes. NVD and CISA KEV are US government public databases explicitly provided for this purpose. GitHub's public API provides access to public repository metadata under their Terms of Service. StackExchange content is licensed under Creative Commons. Hacker News content is public. This server accesses all sources within their intended parameters.

Does the SBOM compliance score mean my organization is compliant with current regulations? No. The sbomCompliance score reflects the density of SBOM-related regulatory activity in Federal Register and Congressional records, plus the package's technical readiness indicators (CI presence, permissive licensing). It is an analytical signal, not a compliance certification. Consult legal counsel for a determination of your specific SBOM obligations.

How long does a full dependency_risk_assessment call take? Typically 30-90 seconds. The 7 upstream actors run in parallel with 120-second timeouts per source. Most sources respond in 15-40 seconds under normal load. Set your client timeout to at least 180 seconds to avoid premature connection drops.

What package ecosystems does OSS supply chain risk analysis support? The server accepts any package name string. It works best for packages with significant GitHub presence: npm, PyPI, Cargo, Maven, Go modules, Ruby gems. Packages that exist only as commercial distributions with minimal GitHub activity will return lower-confidence scores. The ecosystem parameter on dependency_risk_assessment provides a search hint but does not restrict results.

Can I compare multiple packages in a single OSS supply chain assessment call? The compare_package_risks tool accepts a package and optional alternative parameter and returns a risk profile for the primary package. For comparing more than two packages, make parallel calls to compare_package_risks and aggregate the compositeScore and verdict fields in your client.
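Client-side aggregation of parallel results can be as simple as a sort on compositeScore. A minimal sketch, assuming each result carries the compositeScore and verdict fields described on this page:

```python
def rank_by_risk(assessments: dict) -> list:
    """Sort packages lowest composite risk first.
    assessments maps package name -> result dict with compositeScore and verdict."""
    return sorted(assessments.items(), key=lambda kv: kv[1]["compositeScore"])
```

The first entry of the ranked list is the lowest-risk candidate; surfacing its verdict alongside the score keeps the hard-override cases visible even when scores are close.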

Help us improve

If you encounter unexpected results or data quality issues, you can help debug faster by enabling run sharing:

  1. Go to Account Settings > Privacy
  2. Enable Share runs with public Actor creators

This lets us see run details when something goes wrong so we can fix issues faster. Your data is only visible to the actor developer, not publicly.

Support

Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom integrations or enterprise deployments with SLA requirements, reach out through the Apify platform.

How it works

  1. Configure: set your parameters in the Apify Console or pass them via API.
  2. Run: click Start, trigger via API or webhook, or set up a schedule.
  3. Get results: download as JSON, CSV, or Excel. Integrate with 1,000+ apps.

Use cases

  • Sales Teams: build targeted lead lists with verified contact data.
  • Marketing: research competitors and identify outreach opportunities.
  • Data Teams: automate data collection pipelines with scheduled runs.
  • Developers: integrate via REST API or use as an MCP tool in AI workflows.

Ready to try Open Source Software Supply Chain MCP Server?

Start for free on Apify. No credit card required.

Open on Apify Store