Open Source Software Supply Chain MCP Server
Open source supply chain risk assessment for AppSec teams, engineering leads, and platform engineers who need more than a CVE scanner. This MCP server aggregates 7 live data sources — GitHub, NVD, CISA KEV, StackExchange, Hacker News, Federal Register, and Congress.gov — into a composite Dependency Risk Score (0-100) with a machine-readable verdict from `LOW_RISK` to `DO_NOT_USE`.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| dependency_risk_assessment | All 7 sources: GitHub + NVD + CISA + StackExchange + HN + Federal Register + Congress. | $0.20 |
| maintainer_bus_factor | Contributor Gini coefficient, activity recency, community support. | $0.06 |
| vulnerability_exposure_timeline | NVD CVE severity + CISA KEV active exploitation + patch timeline. | $0.08 |
| license_compliance_check | Copyleft vs permissive, SBOM regulatory requirements. | $0.06 |
| community_health_score | GitHub stars + StackExchange Q&A + Hacker News visibility. | $0.08 |
| sbom_regulatory_tracker | Federal Register + Congress SBOM legislation tracking. | $0.06 |
| security_incident_monitor | Hacker News breach reports + new CVE/KEV disclosures. | $0.08 |
| compare_package_risks | Multi-axis dependency risk comparison. | $0.10 |
Example (at the $0.20 dependency_risk_assessment rate): 100 events = $20.00 · 1,000 events = $200.00
Connect to your AI agent
Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
https://ryanclinton--open-source-software-supply-chain-mcp.apify.actor/mcp

```json
{
  "mcpServers": {
    "open-source-software-supply-chain-mcp": {
      "url": "https://ryanclinton--open-source-software-supply-chain-mcp.apify.actor/mcp"
    }
  }
}
```

Documentation
Vulnerability databases tell you a CVE exists. This server tells you whether the project will ever fix it — by combining maintainer bus factor, community health, SBOM regulatory exposure, and active exploitation status into one actionable assessment per $0.045 tool call. No subscription, no per-seat pricing, no CVE-only blindspots.
What data can you access?
| Data Point | Source | Example |
|---|---|---|
| 📊 Composite Dependency Risk Score (0-100) | All 7 sources | compositeScore: 71, verdict: "HIGH_RISK" |
| 👤 Maintainer bus factor + Gini coefficient | GitHub | busFactorLevel: "SINGLE_POINT", giniCoefficient: 0.87 |
| 🔴 Critical CVE count and severity distribution | NVD | criticalCVEs: 4, highCVEs: 9 |
| ⚡ CISA Known Exploited Vulnerabilities | CISA KEV | kevCount: 3, ransomwareCampaignUse: "Known" |
| 📅 Patch timeline: CVEs older than 90 days | NVD | unfixedCVEsOver90Days: 5 |
| 🏥 Community health level and trajectory | GitHub + StackExchange + HN | healthLevel: "GROWING", stars: 14200 |
| 📜 License type and copyleft risk | GitHub | complianceLevel: "MOSTLY_COMPLIANT", licenseRisk: 0 |
| 🏛️ SBOM regulatory signals | Federal Register + Congress | regulatorySignals: 6, sbomBills: 3 |
| 💬 Developer Q&A activity | StackExchange | discussions: 31, answeredQuestions: 24 |
| 📰 Security incident mentions | Hacker News | newsmentions: 9, hnPoints: 340 |
| ⚠️ Plain-language risk signals | Composite | "3 CISA KEV entries — actively exploited vulnerabilities" |
| 📋 Prioritized remediation recommendations | Composite | "Critical CVEs present — patch immediately or replace" |
Why use Open Source Software Supply Chain MCP Server?
Most dependency auditing workflows start and end with a CVE scanner. You receive a list of CVE IDs, severity ratings, and no context on whether the project is actively maintained, whether the community is shrinking, or whether a federal mandate now requires SBOM documentation for your entire dependency tree. Assembling that picture manually — across GitHub, NVD, CISA KEV, Stack Overflow, and regulatory databases — takes hours per package.
This MCP server automates the entire intelligence pipeline. One tool call dispatches 7 actors in parallel, scores the results across 4 independent dimensions, and returns a structured verdict with supporting signals and prioritized recommendations. A package with one critical CVE and 500 active contributors is a very different risk from one with the same CVE and a single dormant maintainer.
- Scheduling — run weekly supply chain audits on critical dependencies to detect new CVEs, KEV additions, or contributor departures before they become incidents
- API access — trigger assessments from Python, JavaScript, or any HTTP client inside your CI/CD pipeline
- Proxy rotation — all upstream data fetching uses Apify's built-in proxy infrastructure for reliable access at scale
- Monitoring — receive Slack or email alerts when scheduled runs flag new `HIGH_RISK` packages or unexpected score increases
- Integrations — pipe results to Zapier, Make, Jira, or your ITSM platform to create tickets automatically on verdict changes
Features
- Composite Dependency Risk Score (0-100) using a weighted formula: vulnerability exposure (35%), bus factor risk (25%), inverse community health (20%), inverse SBOM compliance (20%)
- Maintainer bus factor analysis with contributor Gini coefficient calculation — detects single-maintainer projects where one departure abandons the codebase
- Activity recency scoring — penalizes repositories with no commits in 90, 180, or 365+ days with escalating risk contributions up to 20 points
- CVE severity distribution from NVD — maps CRITICAL (×10 pts), HIGH (×5 pts), and MEDIUM (×2 pts) counts to a 40-point exposure subscale
- CISA KEV cross-reference — identifies CVEs with confirmed active exploitation in the wild, scoring 8 points each on the vulnerability subscale
- Ransomware campaign detection — applies an additional 5-point multiplier for CVEs confirmed used in ransomware campaigns via the CISA KEV `knownRansomwareCampaignUse` field
- Hard-override rule — any package with both a CISA KEV entry and a `SINGLE_POINT` bus factor is forced to a `DO_NOT_USE` verdict regardless of the numeric composite score
- License copyleft detection — classifies GPL, AGPL, LGPL, and SSPL licenses against permissive alternatives (MIT, Apache, BSD, ISC) for enterprise distribution risk
- SBOM compliance readiness — scans Federal Register and Congress.gov for active SBOM mandates and congressional bills affecting the package's regulatory footprint
- Community health index — weighs GitHub stars (35 pts), StackExchange answered questions (25 pts), Hacker News point totals (25 pts), and cross-signal ecosystem vibrancy (15 pts)
- Parallel actor execution — all 7 data source actors run concurrently with 120-second per-source timeouts, minimizing end-to-end latency
- 8 specialized tools — from full composite assessment to focused single-dimension queries for targeted investigation workflows
- Spending limit enforcement — every tool call checks the per-run charge budget before executing, preventing runaway costs in automated pipelines
Use cases for OSS supply chain risk intelligence
AppSec team dependency audits
AppSec engineers responsible for vulnerability management need to prioritize which transitive dependencies require immediate action. A CVE ID alone does not tell you whether the project ships patches quarterly or has been abandoned for two years. The dependency_risk_assessment tool returns bus factor, patch history signals, and community health in one call, letting teams triage accurately instead of treating every CVE as equally urgent.
Engineering lead package selection
Before adopting a new library, a senior engineer or tech lead wants to know whether it is actively maintained, whether the license creates distribution obligations, and whether it has a history of unpatched critical CVEs. The compare_package_risks tool profiles a candidate package alongside its alternative, so the decision is based on community data rather than GitHub star counts alone.
SBOM compliance and procurement
Organizations subject to Executive Order 14028, NIST guidelines, or contractual SBOM requirements need to track which regulations apply to their OSS usage. The sbom_regulatory_tracker tool queries Federal Register and Congress.gov in real time, scoring the density of SBOM-related regulatory activity so compliance teams stay ahead of new requirements before they become audit findings.
Security incident response
When a supply chain attack surfaces — Log4Shell, XZ Utils, SolarWinds — incident responders need to know immediately how severe the CVE profile is and whether CISA has confirmed active exploitation. The security_incident_monitor tool cross-references Hacker News breach coverage with NVD CVE data and the CISA KEV catalog simultaneously, returning a structured vulnLevel and all associated signals.
Platform engineering and CI/CD gates
Platform engineers embedding dependency risk checks in build pipelines need a machine-readable verdict they can gate on. dependency_risk_assessment returns a structured JSON verdict (LOW_RISK, ACCEPTABLE, REVIEW_NEEDED, HIGH_RISK, DO_NOT_USE) that maps cleanly to CI pass/fail logic without manual score interpretation.
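The verdict-to-gate mapping can be as small as a few lines. A minimal sketch in Python; the gate policy below (which verdicts fail vs. warn) is an example choice, not part of the server:

```python
# Which verdict tiers should block a build is a team policy decision;
# this split is only one reasonable default.
FAIL_VERDICTS = {"HIGH_RISK", "DO_NOT_USE"}
WARN_VERDICTS = {"REVIEW_NEEDED"}

def ci_gate(assessment: dict) -> str:
    """Map an assessment response to a CI action: 'fail', 'warn', or 'pass'."""
    verdict = assessment.get("verdict", "")
    if verdict in FAIL_VERDICTS:
        return "fail"
    if verdict in WARN_VERDICTS:
        return "warn"
    return "pass"
```

In a pipeline script, a `fail` result would exit nonzero; `warn` might annotate the PR instead.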
Vendor and procurement due diligence
Procurement teams evaluating software vendors who bundle OSS components need defensible evidence of supply chain health. Running dependency_risk_assessment across a vendor's declared dependencies produces documented risk scores that can be included in due diligence artifacts and contract negotiations.
How to use OSS supply chain risk intelligence
- Connect the MCP server — add `https://open-source-software-supply-chain-mcp.apify.actor/mcp` to your MCP client (Claude Desktop, Cursor, Windsurf, Cline, or any MCP-compatible tool). Include your Apify API token as `Authorization: Bearer YOUR_TOKEN`.
- Choose a tool for your use case — for a full risk picture, call `dependency_risk_assessment` with a package name like `"lodash"` or `"openssl"`. For a focused check, call `vulnerability_exposure_timeline` or `maintainer_bus_factor` directly.
- Read the verdict and signals — the response includes a `verdict` field (`LOW_RISK` through `DO_NOT_USE`), a `compositeScore` from 0-100, and an `allSignals` array with plain-language risk explanations.
- Act on the recommendations — the `recommendations` array in each response contains specific action items: patch urgency, alternative package evaluation, license review requirements, or migration planning.
MCP tools
| Tool | Price | Data Sources | Description |
|---|---|---|---|
| `dependency_risk_assessment` | $0.045 | All 7 | Full composite risk score (0-100) with bus factor, CVE exposure, community health, and SBOM compliance. Returns verdict + recommendations. |
| `maintainer_bus_factor` | $0.045 | GitHub + StackExchange | Contributor Gini coefficient, contributor count, activity recency score. Identifies single-maintainer abandonment risk. |
| `vulnerability_exposure_timeline` | $0.045 | NVD + CISA KEV | CVE severity distribution, active exploitation status, CVEs older than 90 days without patches. |
| `license_compliance_check` | $0.045 | GitHub + Federal Register | Copyleft vs permissive classification, SBOM regulatory requirements, license conflict detection. |
| `community_health_score` | $0.045 | GitHub + StackExchange + HN | Stars, Q&A volume, Hacker News visibility. Returns `healthLevel`: DEAD through THRIVING. |
| `sbom_regulatory_tracker` | $0.045 | Federal Register + Congress | Active SBOM mandates, federal regulations, congressional bills on software supply chain. |
| `security_incident_monitor` | $0.045 | HN + NVD + CISA KEV | Breach reports, new CVE disclosures, CISA KEV additions for a specific package. |
| `compare_package_risks` | $0.045 | GitHub + NVD + KEV + SE + HN | Side-by-side composite risk profile for package selection decisions. |
Tool input parameters
| Parameter | Tool | Type | Required | Default | Description |
|---|---|---|---|---|---|
| `package` | All except `sbom_regulatory_tracker` | string | Yes | — | Package or library name (e.g., `"lodash"`, `"openssl"`, `"log4j"`) |
| `ecosystem` | `dependency_risk_assessment` | string | No | — | Ecosystem hint: npm, pypi, cargo, maven, go |
| `alternative` | `compare_package_risks` | string | No | — | Alternative package name for side-by-side comparison |
| `topic` | `sbom_regulatory_tracker` | string | No | `"software supply chain SBOM"` | Regulatory topic to track in Federal Register and Congress |
Input examples
Full dependency risk assessment for a critical library:
```json
{
  "tool": "dependency_risk_assessment",
  "package": "log4j",
  "ecosystem": "maven"
}
```

Compare two alternative HTTP client libraries:

```json
{
  "tool": "compare_package_risks",
  "package": "axios",
  "alternative": "got"
}
```

Track current SBOM regulation landscape:

```json
{
  "tool": "sbom_regulatory_tracker",
  "topic": "software bill of materials executive order"
}
```
Input tips
- Use full ecosystem-qualified names for CVE matching — `"apache struts"` returns more targeted CVEs than `"struts"`. For Maven artifacts, include the group ID where known.
- Start with `dependency_risk_assessment` — the full composite covers the most ground per dollar. Use focused tools for follow-up investigation on specific flagged subscores.
- Pair `vulnerability_exposure_timeline` with `security_incident_monitor` — for known high-risk packages, run both to get CVE history alongside community breach reporting.
- Use `sbom_regulatory_tracker` quarterly — the regulatory landscape for OSS supply chain changes frequently; a quarterly run captures new Federal Register actions and congressional bill progress.
- Set a spending limit per run — in automated pipelines, configure the Apify run's maximum cost to prevent runaway charges if a batch query list is larger than expected.
Output example
```json
{
  "package": "log4j",
  "compositeScore": 71,
  "verdict": "HIGH_RISK",
  "busFactor": {
    "score": 22,
    "contributors": 38,
    "giniCoefficient": 0.61,
    "busFactorLevel": "ADEQUATE",
    "signals": [
      "Gini coefficient 0.61 — moderate commit concentration among top contributors"
    ]
  },
  "vulnExposure": {
    "score": 88,
    "criticalCVEs": 4,
    "highCVEs": 9,
    "kevCount": 3,
    "vulnLevel": "CRITICAL",
    "signals": [
      "4 CRITICAL CVEs — immediate remediation required",
      "9 HIGH severity CVEs — significant exposure",
      "3 CISA KEV entries — actively exploited vulnerabilities",
      "2 used in ransomware campaigns — urgent patching needed",
      "5 CVEs older than 90 days without patch — slow remediation"
    ]
  },
  "communityHealth": {
    "score": 72,
    "stars": 3200,
    "discussions": 31,
    "newsmentions": 9,
    "healthLevel": "GROWING",
    "signals": [
      "31 StackExchange discussions — active Q&A community",
      "9 Hacker News mentions — tech community visibility"
    ]
  },
  "sbomCompliance": {
    "score": 44,
    "regulatorySignals": 6,
    "licenseRisk": 0,
    "complianceLevel": "PARTIAL",
    "signals": [
      "4 SBOM-related federal regulations — compliance landscape active",
      "2 relevant bills in Congress — legislative momentum"
    ]
  },
  "allSignals": [
    "Gini coefficient 0.61 — moderate commit concentration among top contributors",
    "4 CRITICAL CVEs — immediate remediation required",
    "9 HIGH severity CVEs — significant exposure",
    "3 CISA KEV entries — actively exploited vulnerabilities",
    "2 used in ransomware campaigns — urgent patching needed",
    "5 CVEs older than 90 days without patch — slow remediation",
    "31 StackExchange discussions — active Q&A community",
    "4 SBOM-related federal regulations — compliance landscape active"
  ],
  "recommendations": [
    "Critical CVEs present — patch immediately or replace dependency",
    "CISA KEV vulnerabilities — actively exploited, urgent action required",
    "High dependency risk — evaluate alternative packages"
  ]
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| `package` | string | Package name as provided in the input |
| `compositeScore` | number | Composite risk score 0-100. Higher = more risk. |
| `verdict` | string | `LOW_RISK` / `ACCEPTABLE` / `REVIEW_NEEDED` / `HIGH_RISK` / `DO_NOT_USE` |
| `busFactor.score` | number | Bus factor risk subscale 0-100 |
| `busFactor.contributors` | number | Maximum contributor count detected across matching repos |
| `busFactor.giniCoefficient` | number | Gini coefficient of commit distribution (0 = equal, 1 = single contributor) |
| `busFactor.busFactorLevel` | string | `HEALTHY` / `ADEQUATE` / `CONCENTRATED` / `FRAGILE` / `SINGLE_POINT` |
| `busFactor.signals` | string[] | Plain-language bus factor risk signals |
| `vulnExposure.score` | number | Vulnerability exposure subscale 0-100 |
| `vulnExposure.criticalCVEs` | number | Count of CRITICAL severity CVEs from NVD |
| `vulnExposure.highCVEs` | number | Count of HIGH severity CVEs from NVD |
| `vulnExposure.kevCount` | number | Count of CISA KEV entries for this package |
| `vulnExposure.vulnLevel` | string | `CLEAN` / `LOW` / `MODERATE` / `HIGH` / `CRITICAL` |
| `vulnExposure.signals` | string[] | Plain-language vulnerability signals |
| `communityHealth.score` | number | Community health subscale 0-100. Higher = healthier. |
| `communityHealth.stars` | number | Total GitHub stars across matching repositories |
| `communityHealth.discussions` | number | StackExchange question count for this package |
| `communityHealth.newsmentions` | number | Hacker News story count mentioning this package |
| `communityHealth.healthLevel` | string | `DEAD` / `DECLINING` / `STABLE` / `GROWING` / `THRIVING` |
| `communityHealth.signals` | string[] | Plain-language community health signals |
| `sbomCompliance.score` | number | SBOM compliance readiness 0-100. Higher = more compliant. |
| `sbomCompliance.regulatorySignals` | number | Count of relevant federal regulations + congressional bills |
| `sbomCompliance.licenseRisk` | number | License risk accumulator (higher = more copyleft exposure) |
| `sbomCompliance.complianceLevel` | string | `NON_COMPLIANT` / `GAPS` / `PARTIAL` / `MOSTLY_COMPLIANT` / `FULLY_COMPLIANT` |
| `sbomCompliance.signals` | string[] | Plain-language compliance signals |
| `allSignals` | string[] | All signals from all four subscores combined |
| `recommendations` | string[] | Prioritized remediation recommendations |
Composite risk score methodology
| Score Range | Verdict | Interpretation |
|---|---|---|
| 0-14 | LOW_RISK | Well-maintained, clean vulnerability history, strong community |
| 15-34 | ACCEPTABLE | Minor concerns; monitor but safe to use |
| 35-54 | REVIEW_NEEDED | Moderate bus factor or CVE exposure; team review recommended |
| 55-74 | HIGH_RISK | Critical CVEs, single maintainer, or declining community; plan remediation |
| 75-100 | DO_NOT_USE | Actively exploited vulnerabilities or abandoned project |
The composite formula weights vulnerability exposure most heavily (35%) because active exploitation represents the most immediate risk. Bus factor (25%) is next because an abandoned project cannot patch future vulnerabilities. Community health and SBOM compliance each contribute 20% as leading indicators of long-term viability.
Hard override: any package with both a CISA KEV entry and a SINGLE_POINT bus factor is assigned DO_NOT_USE regardless of the numeric composite score.
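For illustration, the tier thresholds and hard override above can be expressed in a few lines of Python. This is a sketch of the published rules, not the server's actual implementation (which lives in TypeScript):

```python
def verdict_for(score: float, kev_count: int = 0, bus_factor_level: str = "") -> str:
    """Map a composite score to a verdict tier, applying the hard override first."""
    # Hard override: a KEV entry plus a single-maintainer project is always DO_NOT_USE.
    if kev_count > 0 and bus_factor_level == "SINGLE_POINT":
        return "DO_NOT_USE"
    if score < 15:
        return "LOW_RISK"
    if score < 35:
        return "ACCEPTABLE"
    if score < 55:
        return "REVIEW_NEEDED"
    if score < 75:
        return "HIGH_RISK"
    return "DO_NOT_USE"
```

For example, the log4j output above (score 71, no single-point bus factor) lands in the `HIGH_RISK` tier.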
How much does it cost to assess OSS supply chain risk?
This MCP server uses pay-per-event pricing — every tool call costs $0.045. Platform compute costs are included. You are not charged for failed calls that hit the spending limit.
| Scenario | Tool calls | Cost per call | Total cost |
|---|---|---|---|
| Quick test — single package | 1 | $0.045 | $0.045 |
| New project setup — 10 key dependencies | 10 | $0.045 | $0.45 |
| Sprint audit — 50 dependencies | 50 | $0.045 | $2.25 |
| Quarterly audit — 200 packages | 200 | $0.045 | $9.00 |
| Enterprise license audit — 1,000 packages | 1,000 | $0.045 | $45.00 |
You can set a maximum spending limit per run to control costs. The server enforces this limit per call and returns a structured error when the budget is reached.
Snyk's developer plan starts at $98/month for CVE and license data — with no maintainer health, community viability, or SBOM regulatory context. Most engineering teams spend $2-10/month with this server for targeted assessments, with no subscription commitment.
How to connect this OSS supply chain MCP server
Claude Desktop
Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "open-source-software-supply-chain": {
      "url": "https://open-source-software-supply-chain-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```
Cursor / Windsurf / Cline
Add the MCP server URL in your IDE's MCP configuration panel:
https://open-source-software-supply-chain-mcp.apify.actor/mcp
Include `Authorization: Bearer YOUR_APIFY_TOKEN` as a request header.
Python
```python
import httpx
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"
MCP_URL = "https://open-source-software-supply-chain-mcp.apify.actor/mcp"

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
        "name": "dependency_risk_assessment",
        "arguments": {
            "package": "requests",
            "ecosystem": "pypi"
        }
    },
    "id": 1
}

response = httpx.post(
    MCP_URL,
    json=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {APIFY_TOKEN}"
    },
    timeout=180
)
response.raise_for_status()

result = response.json()
content = json.loads(result["result"]["content"][0]["text"])

print(f"Package: {content['package']}")
print(f"Risk Score: {content['compositeScore']}/100 — {content['verdict']}")
print(f"Critical CVEs: {content['vulnExposure']['criticalCVEs']}")
print(f"KEV count: {content['vulnExposure']['kevCount']}")
print(f"Bus factor level: {content['busFactor']['busFactorLevel']}")
for rec in content.get("recommendations", []):
    print(f"  -> {rec}")
```
JavaScript
```javascript
const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const MCP_URL = "https://open-source-software-supply-chain-mcp.apify.actor/mcp";

const response = await fetch(MCP_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${APIFY_TOKEN}`
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "tools/call",
    params: {
      name: "vulnerability_exposure_timeline",
      arguments: { package: "lodash" }
    },
    id: 1
  })
});

const data = await response.json();
const result = JSON.parse(data.result.content[0].text);

console.log(`Package: ${result.package}`);
console.log(`Vuln level: ${result.vulnExposure.vulnLevel}`);
console.log(`Critical CVEs: ${result.vulnExposure.criticalCVEs}`);
console.log(`CISA KEV entries: ${result.vulnExposure.kevCount}`);
for (const signal of result.vulnExposure.signals) {
  console.log(`  Signal: ${signal}`);
}
```
cURL
```bash
# Full dependency risk assessment
curl -X POST "https://open-source-software-supply-chain-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "dependency_risk_assessment",
      "arguments": {
        "package": "openssl",
        "ecosystem": "c"
      }
    },
    "id": 1
  }'

# Monitor security incidents for a specific package
curl -X POST "https://open-source-software-supply-chain-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "security_incident_monitor",
      "arguments": { "package": "xz-utils" }
    },
    "id": 2
  }'
```
How Open Source Software Supply Chain MCP Server works
Phase 1: Parallel data collection across 7 sources
When a tool call arrives, the server dispatches up to 7 Apify actors simultaneously using Promise.all with a 512 MB memory allocation and 120-second timeout per source. The 7 actors are:
- github-repo-search — public repository metadata: star counts, fork counts, contributor numbers, last-pushed timestamps, open issue counts, license SPDX identifiers, and CI/Actions presence flags
- nvd-cve-vulnerability-search — National Vulnerability Database CVE entries with CVSS v3 severity (`CRITICAL`, `HIGH`, `MEDIUM`, `LOW`), published dates, and affected product version ranges
- cisa-kev-search — CISA Known Exploited Vulnerabilities catalog entries with the `knownRansomwareCampaignUse` field indicating confirmed ransomware association
- stackexchange-search — Stack Overflow and related Q&A: question count, `is_answered` flags, and answer counts as community support proxies
- hacker-news-search — HN story counts and aggregate point totals as a proxy for tech-community security incident awareness
- federal-register-search — Federal Register entries matching `sbom`, `software bill`, or `supply chain` keywords in title and abstract fields
- congress-bill-search — Congressional bills matching `software`, `cyber`, or `supply chain` in bill titles
Tools that require fewer sources dispatch only the relevant subset. maintainer_bus_factor uses 2 actors; vulnerability_exposure_timeline uses 2; sbom_regulatory_tracker uses 2. This reduces both latency and cost for focused queries.
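The fan-out pattern itself is simple to reproduce. The server uses `Promise.all` in TypeScript; the Python asyncio sketch below shows the same idea with a per-source timeout, where a slow source degrades to `None` instead of failing the whole call. `fetch_source` is a hypothetical stand-in for a real actor invocation:

```python
import asyncio

async def fetch_source(name: str, delay: float) -> dict:
    """Stand-in for one upstream actor call (real calls would hit the Apify API)."""
    await asyncio.sleep(delay)
    return {"source": name, "items": []}

async def collect(sources: dict, timeout: float = 120.0) -> dict:
    """Dispatch all sources concurrently; a timed-out source yields None."""
    async def guarded(name, delay):
        try:
            return await asyncio.wait_for(fetch_source(name, delay), timeout)
        except asyncio.TimeoutError:
            return None
    results = await asyncio.gather(*(guarded(n, d) for n, d in sources.items()))
    return dict(zip(sources, results))

# Two fast sources and one that exceeds the (shortened) timeout:
data = asyncio.run(collect({"github": 0.01, "nvd": 0.01, "hn": 0.5}, timeout=0.1))
```

End-to-end latency is then bounded by the slowest source that finishes within the timeout, not by the sum of all seven.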
Phase 2: Four-dimensional scoring
The scoring.ts module applies four independent models to the aggregated data:
Bus Factor model (0-100 risk scale): Contributor count maps to a base risk score: single maintainer = 40 pts, 2-3 contributors = 25 pts, 4-5 = 15 pts, 6-10 = 8 pts. A Gini coefficient is then calculated from the contributor count distribution across matching repositories using the formula sumDiff / (n * totalSum) where contributions are sorted ascending and index-weighted. The Gini score contributes up to 20 additional points. Activity recency adds 0-20 points based on days since last push: 0 pts if active within 90 days, 8 pts at 90-180 days, 15 pts at 180-365 days, 20 pts beyond one year stale.
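The index-weighted Gini calculation described above can be sketched as follows. This is the standard index-weighted formulation consistent with the description; the exact arithmetic in `scoring.ts` may differ:

```python
def gini(contributions: list) -> float:
    """Gini coefficient over contribution counts, sorted ascending and
    index-weighted. 0 = perfectly equal; approaches 1 as one contributor
    dominates."""
    xs = sorted(contributions)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

An evenly split project scores 0.0; a repository where one account made every commit among four listed contributors scores 0.75 (the maximum for n = 4 is (n − 1)/n).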
Vulnerability Exposure model (0-100 risk scale): CVE severity maps to points — CRITICAL × 10, HIGH × 5, MEDIUM × 2 — capped at 40. CISA KEV entries score 8 points each plus 5 per confirmed ransomware campaign, capped at 35. CVEs published more than 90 days ago score 3 points each, capped at 15. A compound bonus of 10 points applies when both critical CVEs and KEV entries are present simultaneously.
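A sketch of those point caps in Python. The caps and multipliers follow the description above; the production model may combine or normalize them differently:

```python
def vuln_exposure(critical: int, high: int, medium: int,
                  kev: int, ransomware: int, stale_90d: int) -> int:
    """Vulnerability exposure subscore (0-100) from the caps described above."""
    severity = min(critical * 10 + high * 5 + medium * 2, 40)  # capped at 40
    kev_pts  = min(kev * 8 + ransomware * 5, 35)               # capped at 35
    stale    = min(stale_90d * 3, 15)                          # capped at 15
    compound = 10 if (critical > 0 and kev > 0) else 0         # compound bonus
    return min(severity + kev_pts + stale + compound, 100)
```

The compound bonus means a package with both critical CVEs and confirmed exploitation scores strictly worse than either condition alone.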
Community Health model (0-100 positive scale): GitHub stars map to a 35-point scale (10,000+ = 35, 1,000+ = 25, 100+ = 15, 10+ = 8). StackExchange question count and answered-question ratios contribute up to 25 points. Hacker News story count and aggregate point totals contribute up to 25 points. Cross-signal ecosystem vibrancy (both GitHub and StackExchange active simultaneously) adds up to 15 points.
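The star-tier mapping is a simple threshold lookup. A sketch of the 35-point GitHub-star component using the tiers listed above:

```python
def star_points(stars: int) -> int:
    """GitHub-star contribution to the 35-point community subscale."""
    tiers = [(10_000, 35), (1_000, 25), (100, 15), (10, 8)]
    for threshold, points in tiers:
        if stars >= threshold:
            return points
    return 0
```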
SBOM Compliance model (0-100 positive scale): Copyleft licenses (GPL, AGPL, LGPL, SSPL) add 3 to a risk accumulator; missing licenses add 5. The license score inverts this accumulator up to 30 points. Federal Register SBOM-keyword matches contribute up to 25 points; congressional bill matches contribute up to 25 points; CI/permissive-license readiness indicators (GitHub Actions presence + MIT/Apache/BSD license) add up to 20 points.
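The license risk accumulator can be sketched like this. The substring-matching heuristic is an assumption for illustration; the server may classify by exact SPDX identifier instead:

```python
# AGPL and LGPL identifiers also contain "GPL", so a marker scan catches them.
COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL", "SSPL")

def license_risk(licenses: list) -> int:
    """Accumulate risk: +3 per copyleft license, +5 per missing license."""
    risk = 0
    for spdx in licenses:
        if not spdx:
            risk += 5
        elif any(marker in spdx.upper() for marker in COPYLEFT_MARKERS):
            risk += 3
    return risk
```

The license subscore then inverts this accumulator (up to 30 points), so permissive, well-labeled projects score highest.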
Phase 3: Composite score and verdict assignment
The composite formula weights the four subscores:
```
compositeScore = busFactor.score × 0.25
               + vulnExposure.score × 0.35
               + (100 − communityHealth.score) × 0.20
               + (100 − sbomCompliance.score) × 0.20
```
Community health and SBOM compliance are inverted so that healthier, more compliant packages reduce overall risk. The composite score maps to a verdict tier at 15, 35, 55, and 75 thresholds. The hard override then applies: if kevCount > 0 AND busFactorLevel === 'SINGLE_POINT', the verdict is forced to DO_NOT_USE regardless of the numeric score.
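The weighting and inversion can be written out directly. A sketch of the composite formula as runnable Python (verdict mapping and the hard override are applied afterward, as described in the methodology section):

```python
def composite_score(bus: float, vuln: float, health: float, sbom: float) -> float:
    """Weighted composite risk: health and compliance are inverted so that
    healthier, more compliant packages reduce the overall score."""
    return (bus * 0.25
            + vuln * 0.35
            + (100 - health) * 0.20
            + (100 - sbom) * 0.20)
```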
Phase 4: Signal aggregation and recommendation generation
Signals from all four models are concatenated into a flat allSignals array for easy consumption. Recommendations are generated from conditional checks: critical CVEs trigger an immediate-patch recommendation, CISA KEV entries trigger an urgent-action recommendation, dead community health triggers a migration recommendation, high bus factor triggers a fork-or-replace recommendation, and copyleft license risk triggers a legal review recommendation.
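A sketch of that conditional generation, reusing the recommendation strings from the output example above. The trigger thresholds here are illustrative assumptions, not the server's exact conditions:

```python
def build_recommendations(a: dict) -> list:
    """Generate prioritized recommendations from simple conditional checks."""
    recs = []
    if a.get("criticalCVEs", 0) > 0:
        recs.append("Critical CVEs present — patch immediately or replace dependency")
    if a.get("kevCount", 0) > 0:
        recs.append("CISA KEV vulnerabilities — actively exploited, urgent action required")
    if a.get("compositeScore", 0) >= 55:  # assumed HIGH_RISK threshold trigger
        recs.append("High dependency risk — evaluate alternative packages")
    return recs
```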
Tips for best results
- Use full ecosystem-qualified names for CVE matching. Searching `"struts"` returns fewer relevant CVEs than `"apache struts"`. For Maven artifacts, include the group ID where known.
- Run `dependency_risk_assessment` first, then drill down with focused tools. The full assessment returns all four subscores. If `vulnExposure.score` dominates, follow up with `vulnerability_exposure_timeline` for the complete CVE list.
- Cross-reference with `compare_package_risks` before adopting new dependencies. Calling it with your current package and its leading alternative surfaces relative risk profiles in one round trip.
- Schedule `security_incident_monitor` for production critical-path dependencies. Apify scheduling supports daily runs. Pair with a webhook to Slack or PagerDuty when `vulnLevel` escalates to `CRITICAL`.
- Interpret community health alongside bus factor, not separately. A `THRIVING` community with a `SINGLE_POINT` bus factor is still fragile — the community may be active users, not contributors. Check both subscores before concluding a package is safe.
- Use `sbom_regulatory_tracker` before enterprise procurement cycles. The regulatory landscape is changing rapidly; a quarterly run before procurement review boards surfaces new federal mandates before they become compliance gaps.
- Combine with static analysis tools for defense in depth. This server covers behavioral and community signals that static scanners miss. For code-level dependency analysis, pair with Syft or OWASP Dependency-Check and bring the composite risk score into your existing workflow via the API.
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| OSS Dependency Risk Report | Run this actor for a formatted HTML/PDF supply chain risk report for stakeholders; use this MCP for interactive AI-driven investigation of flagged packages |
| Cyber Attack Surface Report | Combine supply chain vulnerability data from this MCP with infrastructure exposure scanning for a complete application security picture |
| Company Deep Research | Enrich vendor due diligence with supply chain health scores for the vendor's declared OSS dependencies |
| Website Tech Stack Detector | Detect which OSS libraries a target website uses (100+ technology fingerprints), then feed each into dependency_risk_assessment for a full supply chain audit |
| Federal Contract Intelligence | Identify federal contractors subject to SBOM mandates, then use sbom_regulatory_tracker to assess their compliance landscape |
| SEC EDGAR Filing Analyzer | Cross-reference cybersecurity risk disclosures in 10-K filings with supply chain vulnerability scores for publicly traded software vendors |
| B2B Lead Qualifier | Score security vendor leads against their own OSS supply chain health to assess credibility before outreach |
Limitations
- GitHub matching is keyword-based, not registry-resolved. The server searches GitHub by package name. It does not resolve npm, PyPI, or Cargo registry metadata to canonical repository URLs. Packages with generic names may return less relevant results.
- No transitive dependency tree traversal. The server analyzes individual packages named in the input. It does not read
package.json,requirements.txt, or lock files. For full SBOM generation, use Syft or CycloneDX CLI and then analyze key dependencies here. - CVE matching depends on naming precision. NVD searches are text-based. A query for
"react"matches any CVE mentioning React. Specificity matters:"react dom"or"facebook react"returns more targeted results. - GitHub contributor data reflects repository search results, not registry contribution graphs. Contributor count and Gini coefficient are derived from repository metadata fields, not full commit history.
- Community health scores reflect current state, not trends. A project can have a historically high star count while being effectively abandoned. The activity recency signal partially addresses this, but trend analysis over time requires multiple runs.
- SBOM compliance scoring reflects regulatory landscape density, not your specific compliance obligations. A high `regulatorySignals` count means the regulatory environment is active, not that your use case is necessarily out of compliance. Consult legal counsel for compliance determinations.
- Hacker News incident data is community-curated, not exhaustive. Not all security incidents receive HN coverage. Low `newsMentions` does not rule out incidents.
- Private packages and internal forks are not visible. Data is limited to public GitHub repositories. Internally forked or proprietary distributions are outside the scope of this tool.
Integrations
- Zapier — trigger a `security_incident_monitor` call when a new CVE alert arrives, and create a Jira ticket if `vulnLevel` is CRITICAL or HIGH
- Make — build automated dependency audit workflows that run `dependency_risk_assessment` on a package list and route `HIGH_RISK` results to a review queue
- Google Sheets — export composite risk scores and all signals to a dependency risk register spreadsheet for stakeholder reporting
- Apify API — embed supply chain checks directly in CI/CD pipelines using the HTTP API with structured JSON responses
- Webhooks — send POST notifications to Slack or PagerDuty when scheduled security monitoring runs return verdict changes
- LangChain / LlamaIndex — use this MCP as a tool inside AI security analyst agents that investigate dependency trees and produce risk narratives
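For the webhook integration above, a client usually alerts only when a verdict actually changes between monitoring runs. A minimal sketch of that diffing logic, assuming the `verdict` field from this server's responses (the notification dispatch itself is omitted):

```python
def verdict_changes(previous: dict[str, str], current: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Return packages whose verdict changed between two monitoring runs."""
    return {
        pkg: (previous[pkg], v)
        for pkg, v in current.items()
        if pkg in previous and previous[pkg] != v
    }

prev = {"left-pad": "LOW_RISK", "event-stream": "LOW_RISK"}
curr = {"left-pad": "LOW_RISK", "event-stream": "DO_NOT_USE"}
print(verdict_changes(prev, curr))  # {'event-stream': ('LOW_RISK', 'DO_NOT_USE')}
```

Only the non-empty result would be forwarded to Slack or PagerDuty, keeping scheduled runs quiet when nothing moved.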
Troubleshooting
- Low CVE counts for a package you expect to have many vulnerabilities — CVE matching is text-based. Try a more specific query: instead of `"spring"`, use `"spring framework"` or `"pivotal spring"`. Also try the package's canonical vendor name as it appears in NVD product names.
- Bus factor returning HEALTHY for a known single-maintainer project — GitHub repo search by package name may return popular forks or unrelated projects that inflate the contributor count. Check the `signals` array — concentration risk signals are appended regardless of the aggregate score.
- `dependency_risk_assessment` returning empty signals — If all 7 upstream actors return empty datasets, the composite score defaults to zero with no signals. Check that the package name is spelled correctly and verify the Apify platform status at status.apify.com.
- Spending limit reached error on first call — The per-run spending limit may be set too low. Tool calls are billed per event at the prices listed above (up to $0.20 for `dependency_risk_assessment`); ensure the run budget covers at least one call, or around $5.00 for a batch of 100 packages.
- Timeout errors on `dependency_risk_assessment` — This tool dispatches 7 actors with 120-second individual timeouts. Total latency can reach 60-90 seconds under load. Increase the client-side timeout to at least 180 seconds.
Responsible use
- This server only accesses publicly available data from GitHub, NVD, CISA, StackExchange, Hacker News, the Federal Register, and Congress.gov.
- NVD and CISA KEV are official US government databases. GitHub public repository metadata is openly accessible under GitHub's Terms of Service.
- Risk scores are analytical indicators, not legal compliance determinations. Do not use scores as the sole basis for procurement or compliance decisions without consulting qualified security and legal professionals.
- Do not use this server to build automated rejection lists that disadvantage open source maintainers based on bus factor metrics without human review.
- For guidance on web scraping legality, see Apify's guide.
FAQ
How does open source supply chain risk assessment differ from a standard CVE scanner? A standard CVE scanner reports which CVEs exist. This server adds maintainer bus factor (who will fix them), community health (is the project still viable), license compliance (can you legally distribute it), and SBOM regulatory context (are you required to document it). A package with one critical CVE and an active 500-contributor community is a very different risk from one with the same CVE and a single dormant maintainer.
How accurate is the Gini coefficient for OSS bus factor analysis?
The Gini coefficient is calculated from contributor counts returned by GitHub repo search across matching repositories. It approximates commit concentration but does not access full git log history. For projects where a few maintainers hold merge rights over many nominal contributors, the score may understate concentration risk. Use busFactorLevel as a directional indicator, not a precise measurement.
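For reference, a standard Gini computation over per-contributor contribution counts looks like the sketch below. This restates the textbook formula as a directional illustration, not the server's exact implementation:

```python
def gini(counts: list[int]) -> float:
    """Gini coefficient of contribution counts.

    0.0 means perfectly even contribution; values near 1.0 mean
    one contributor dominates (high bus-factor risk).
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted sum formulation of the Gini coefficient.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(round(gini([100, 100, 100, 100]), 2))  # 0.0  (even split)
print(round(gini([1, 1, 1, 997]), 2))        # 0.75 (highly concentrated)
```

Because the server feeds this from repository search metadata rather than full commit history, treat the resulting level as directional, as noted above.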
How current is the vulnerability data in OSS supply chain risk assessments? NVD CVE and CISA KEV data are fetched live at query time. New CVE disclosures and KEV catalog additions published today will appear in results immediately. There is no caching layer; each tool call queries upstream sources fresh.
Can I use this MCP server to scan an entire dependency tree automatically?
The server analyzes individual packages passed by name. To cover a full tree, generate a dependency list from your package manager (npm list --depth=0, pip freeze, cargo tree) and call dependency_risk_assessment for each direct dependency. Automate the batch via the HTTP API.
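That batching approach can be sketched for an npm project: read the direct dependencies from `package.json` and issue one `dependency_risk_assessment` call per name. The manifest below is an illustrative example, and the HTTP dispatch itself is stubbed out:

```python
import json

def direct_dependencies(package_json: str) -> list[str]:
    """Collect direct runtime and dev dependencies from a package.json document."""
    manifest = json.loads(package_json)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    return sorted(deps)

manifest = (
    '{"dependencies": {"react": "^18.2.0", "lodash": "^4.17.21"},'
    ' "devDependencies": {"jest": "^29.0.0"}}'
)
for pkg in direct_dependencies(manifest):
    # One dependency_risk_assessment call per direct dependency (dispatch omitted).
    print(pkg)  # jest, lodash, react
```

The same pattern applies to `pip freeze` or `cargo tree` output with a different parsing step.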
What does DO_NOT_USE mean for an OSS dependency verdict?
Either the composite score reached 75+, or the hard override triggered (CISA KEV entry + SINGLE_POINT bus factor simultaneously). In practice, this means the package has confirmed actively exploited vulnerabilities and a single maintainer who may not fix them. The recommendations array specifies whether to patch, replace, or migrate.
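The decision rule described above can be expressed directly. The sketch restates only what the text documents (the 75+ composite threshold, or the KEV-plus-SINGLE_POINT hard override); boundaries for the other verdicts are not documented here and are omitted:

```python
def is_do_not_use(composite_score: float, in_cisa_kev: bool, bus_factor_level: str) -> bool:
    """Apply the documented DO_NOT_USE rule: score threshold or hard override."""
    hard_override = in_cisa_kev and bus_factor_level == "SINGLE_POINT"
    return composite_score >= 75 or hard_override

print(is_do_not_use(40, True, "SINGLE_POINT"))  # True  (hard override fires)
print(is_do_not_use(60, True, "HEALTHY"))       # False (no override, below 75)
```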
How is this different from Snyk or Dependabot for supply chain security? Snyk and Dependabot focus on CVE detection and automated PR generation. They do not score maintainer health, community viability, or SBOM regulatory exposure. This server provides the contextual intelligence those tools omit: a package with no open CVEs but a dead community and a single dormant maintainer represents a future risk that CVE databases will not surface.
Can I schedule open source supply chain monitoring to run daily?
Yes. Use Apify scheduling to call security_incident_monitor or dependency_risk_assessment on a cron schedule for your critical-path dependencies. Configure a webhook to notify Slack, PagerDuty, or email when vulnLevel escalates or a new CISA KEV entry appears.
Is it legal to use public CVE and GitHub data for commercial dependency auditing? Yes. NVD and CISA KEV are US government public databases explicitly provided for this purpose. GitHub's public API provides access to public repository metadata under their Terms of Service. StackExchange content is licensed under Creative Commons. Hacker News content is public. This server accesses all sources within their intended parameters.
Does the SBOM compliance score mean my organization is compliant with current regulations?
No. The sbomCompliance score reflects the density of SBOM-related regulatory activity in Federal Register and Congressional records, plus the package's technical readiness indicators (CI presence, permissive licensing). It is an analytical signal, not a compliance certification. Consult legal counsel for a determination of your specific SBOM obligations.
How long does a full dependency_risk_assessment call take?
Typically 30-90 seconds. The 7 upstream actors run in parallel with 120-second timeouts per source. Most sources respond in 15-40 seconds under normal load. Set your client timeout to at least 180 seconds to avoid premature connection drops.
What package ecosystems does OSS supply chain risk analysis support?
The server accepts any package name string. It works best for packages with significant GitHub presence: npm, PyPI, Cargo, Maven, Go modules, Ruby gems. Packages that exist only as commercial distributions with minimal GitHub activity will return lower-confidence scores. The ecosystem parameter on dependency_risk_assessment provides a search hint but does not restrict results.
Can I compare multiple packages in a single OSS supply chain assessment call?
The compare_package_risks tool accepts a package and optional alternative parameter and returns a risk profile for the primary package. For comparing more than two packages, make parallel calls to compare_package_risks and aggregate the compositeScore and verdict fields in your client.
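The client-side aggregation mentioned above can be as simple as sorting the returned profiles by `compositeScore`, since the score runs 0-100 toward risk and lower is safer. A minimal sketch using the `compositeScore` and `verdict` fields from this document's schema (the sample profiles are illustrative):

```python
def rank_by_risk(profiles: dict[str, dict]) -> list[tuple[str, int, str]]:
    """Order packages from lowest to highest composite risk score."""
    return sorted(
        ((pkg, p["compositeScore"], p["verdict"]) for pkg, p in profiles.items()),
        key=lambda row: row[1],
    )

profiles = {
    "express": {"compositeScore": 22, "verdict": "LOW_RISK"},
    "event-stream": {"compositeScore": 81, "verdict": "DO_NOT_USE"},
    "koa": {"compositeScore": 35, "verdict": "LOW_RISK"},
}
print(rank_by_risk(profiles)[0][0])  # 'express' ranks as the safest option
```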
Help us improve
If you encounter unexpected results or data quality issues, you can help debug faster by enabling run sharing:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see run details when something goes wrong so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom integrations or enterprise deployments with SLA requirements, reach out through the Apify platform.