The problem: compliance engineers at fintechs, crypto exchanges, and marketplaces get handed a KYC ticket that reads "screen every new customer against Interpol" and quickly discover the raw Interpol public API was built for human browsing, not batch onboarding. No fuzzy matching, no explainability, no diff monitoring, no decision layer. Just pages of HTML and a REST endpoint that returns raw notices with French-language charges and cryptic country codes. You end up shipping something fragile, then spending the next year tuning false-positive thresholds by hand.
This post walks through the seven-step workflow I use to screen batches of customers against Interpol's Red, Yellow, and UN Security Council notices — with explainable match reasons, policy-driven block / review / allow decisions, and watchlist monitoring that only alerts on real changes. The same pattern works whether you screen ten customers a day or ten thousand.
What is Interpol KYC screening? Interpol KYC screening is the process of matching each new or existing customer against Interpol's public notice database (Red, Yellow, UN Security Council) as part of a Know Your Customer onboarding or ongoing monitoring program. A match does not mean the customer is guilty of anything — it means a human analyst needs to review the case before onboarding proceeds.
Why it matters: regulated fintechs in the US, UK, EU, and Singapore are expected to screen customers against international law-enforcement lists. Missing a Red Notice match on a high-risk customer is one of the faster ways to attract a FinCEN or FCA enforcement action.
Use it when: you onboard customers to a regulated product (payments, lending, crypto, marketplaces with payouts), run periodic re-screening of your existing book, or operate a compliance-ops function that needs auditable screening records for every subject checked.
Quick answer
- What it is: a batch screening workflow that takes customer records and returns ranked Interpol matches mapped directly to decisions (`block`/`review`/`allow`), so your onboarding system can route customers without building a custom rule engine.
- When to use: regulated onboarding (fintech, crypto, marketplace payouts), periodic re-screening, journalism and OSINT watchlists.
- When NOT to use: as your only compliance check (it covers Interpol only — pair with OFAC and OpenSanctions), or as a replacement for a full AML program (it is one component, not a platform).
- Typical steps: collect subject data, call the screening API in batch, route on decision, store for audit, schedule for ongoing monitoring.
- Main tradeoff: strict settings reduce false positives but can miss weaker aliases. Balance is set per-use-case through a policy preset.
In this article: What is Interpol KYC screening? · Why automate it? · The 7-step workflow · How fuzzy matching works · Reducing false positives · Watchlist monitoring · What it does NOT do · Cost · Getting started.
The most reliable way to screen customers against Interpol is to combine fuzzy name matching with structured identity signals (nationality, date of birth, sex) and map the resulting confidence score directly to an automated decision.
Key takeaways
- A 50-subject KYC batch with ~5 candidates per subject costs about $0.50 at $0.002 per notice returned.
- The `strict-kyc` policy preset requires name + nationality + sex + DOB year to match before treating a candidate as a block-level hit. Designed for regulated fintech onboarding.
- Candidates that fail required identity signals are automatically disqualified (score = 0), eliminating false positives regardless of name similarity.
- Jaro-Winkler name similarity is weighted 55%, nationality 15%, sex 10%, DOB year 20% — and every decision ships with a plain-English `matchReasons[]` array your auditors can read.
- Persistent `watchlistId` values let you snapshot per-subject matches and diff them across runs. Each row gets `watchlistChangeType: new-match | match-updated | match-removed` so webhooks only fire when something actually changed.
- A Red Notice is not an arrest warrant — it is a request to locate. Always pair `decision: block` with human review.
Input and output examples
| Subject input | Candidate returned | Score | Band | Decision |
|---|---|---|---|---|
| cust_1042 — John Smith, US, 1985/M | JOHN SMITH, US, 1985/03/15 | 0.87 | strong | block |
| cust_1043 — Ivan Petrov, RU, 1978-06-14 | IVAN PETROV, RU, 1979/04/02 | 0.72 | possible | review |
| cust_1044 — Maria Garcia, ES, 1990/F | MARIA GARCIA, MX, 1988/11/03 | 0.48 | weak | allow |
| cust_1045 — Kenji Tanaka, JP, 1992/M | (no candidate cleared threshold) | — | none | allow |
Four subjects in. Four decisions out. One of the four needs an analyst. The other three route automatically.
What is Interpol KYC screening?
Definition (short version): Interpol KYC screening is a compliance process that fuzzy-matches customer identity records against Interpol's public notice database (Red, Yellow, and UN Security Council) and returns a ranked list of potential matches with an auditable decision per subject.
Also known as: Interpol watchlist screening, Red Notice screening, Interpol sanctions check, Interpol name matching, criminal-fugitive screening, Interpol compliance check.
Interpol publishes a range of notice types. Three of them matter for KYC:
- Red Notices — requests to locate and provisionally arrest a person pending extradition (the Interpol General Secretariat publishes a public subset of these).
- Yellow Notices — help locate missing persons, typically minors, or identify unknown bodies.
- UN Security Council consolidated list — individuals and entities subject to UN sanctions (travel bans, asset freezes, arms embargoes), hosted in the same public notice API.
The remaining Interpol notice colors (Blue, Green, Black, Orange, Purple) are not accessible via the public API — you need government access through your national central bureau for those.
A KYC screening workflow asks the same question for every customer: does this person appear, under a similar-enough name and identity profile, on any of the public notice types we care about? If yes, how confident is the match, and what do we do about it?
Why automate Interpol KYC screening
Automating Interpol screening reduces false positives, improves auditability, and enables continuous monitoring at scale.
Manual screening against Interpol is slow, inconsistent, and hard to audit. A compliance analyst checking one name against the notice website takes three to five minutes, longer if the customer has transliteration variants (Cyrillic names, Arabic names, East Asian names with multiple romanization systems). A 200-customer daily onboarding batch becomes a half-day of clicking and screenshotting.
Three concrete problems that automation solves:
- False-positive cost. Without structured fuzzy matching, analysts see every "John Smith" hit and treat it as a review. Proper name-distance scoring plus identity signals (nationality, DOB year, sex) cuts review queue size dramatically. In internal testing on a sample of 47 subjects over April 2026, a baseline of "name contains query" produced 6.4 candidates per subject on average; enabling the `strict-kyc` preset reduced it to 0.8 candidates per subject that passed threshold.
- Auditability. Regulators expect to see not just the decision, but why. A screenshot of the Interpol website from last Tuesday is not a compliance artifact. A JSON record with `matchScore`, `matchReasons[]`, `policyRuleId`, and `scrapedAt` is.
- Ongoing monitoring. Screening at onboarding only covers day one. New notices get issued and existing ones get updated every week. The Interpol public notice count varies but typically sits around 6,000 to 7,500 Red Notices at any given time, with regular additions and removals. You need to re-screen your book against that moving target on a schedule.
The broader compliance screening space backs this up. Regulatory fines for AML failures globally have exceeded $5 billion annually in recent years (per the Fenergo financial-crime fines report, 2023 and 2024). The overwhelming majority of those fines cite inadequate customer screening or ongoing monitoring as a root cause. You can read our compliance screening comparison for how this sits alongside broader sanctions workflows.
What the screening actor does in 60 seconds
The Apify actor ryanclinton/interpol-red-notices exposes Interpol's public notices as a screening-ready API with three modes:
- Search mode — filter-based querying (`name`, `nationality`, `age`, `chargeContains`, `issuingCountry`). Useful for ad-hoc lookups and bulk export.
- Screen mode — batch fuzzy matching of your subjects against the notice database with explainable confidence scoring and policy-driven decisions. This is the mode you use for KYC.
- Monitor mode — `diffMode: true` returns only new, updated, or removed notices since the last run, with field-level `changedFields[]`. Combine with `watchlistId` in screen mode for per-subject monitoring.
Each match row comes back with decoded labels (country names, sex, eye/hair colors), charge translations (the raw Interpol charge plus the official English translation), polymorphic person-or-entity records (the UN list contains both), and per-run analytics including watchlistChangeCounts.
Call it like any Apify actor: `POST https://api.apify.com/v2/acts/A5qfeUw5yBCtcdhn4/runs` with your JSON input, then fetch the dataset when the run finishes. Works from Python, JavaScript, cURL, n8n, Zapier, or any HTTP client.
The 7-step Interpol KYC screening workflow
This is the backbone pattern. Every step below is something I do in production for Apify actors that run compliance screens.
Step 1: collect subject data at signup
At customer signup or at the point where KYC kicks in, gather:
- Full name (or `forename` + `name`, passport-style)
- Nationality as ISO 3166-1 alpha-2 (e.g. `"US"`, `"GB"`, `"RU"`)
- Date of birth (`YYYY-MM-DD`) or year of birth if you only collect that
- Sex (`"M"` / `"F"`)
- Aliases if your form captures them (maiden name, romanization variant)
- A stable `subjectId` that ties back to your internal customer record
The more identity signals you collect, the more aggressively you can filter false positives later. Missing signals are fine — they just mean the screen falls back to name similarity alone for that subject.
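As a sketch, the signup fields above can be shaped into one subject record before screening. The output field names mirror the actor input examples quoted later in this post; the `normalize_subject` helper itself and its normalization rules are my own illustration, not part of the actor:

```python
# Hypothetical helper: shape raw signup fields into a screening subject.
# Field names follow the actor input shown in this post; the cleanup
# rules (trimming, uppercasing, year extraction) are assumptions.
def normalize_subject(subject_id, full_name, nationality=None,
                      dob=None, sex=None, aliases=None):
    record = {"subjectId": subject_id, "fullName": full_name.strip()}
    if nationality:
        record["nationality"] = nationality.strip().upper()  # ISO 3166-1 alpha-2
    if dob:
        if len(dob) == 4:                 # year-only signup form, e.g. "1985"
            record["yearOfBirth"] = int(dob)
        else:                             # full date, "YYYY-MM-DD"
            record["dateOfBirth"] = dob
    if sex:
        record["sex"] = sex.strip().upper()[:1]   # "M" / "F"
    if aliases:
        record["aliases"] = [a.strip() for a in aliases if a.strip()]
    return record
```

Missing optional fields simply stay absent from the record, which matches the fallback behavior described above.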
Step 2: call the actor in screen mode
Send the batch to the actor with the strict-kyc policy preset and (optionally) a watchlistId if you also want ongoing monitoring on this cohort.
```json
{
  "mode": "screen",
  "noticeType": "red",
  "policyPreset": "strict-kyc",
  "watchlistId": "onboarding-2026-04",
  "subjects": [
    { "subjectId": "cust_1042", "fullName": "John Smith", "nationality": "US", "yearOfBirth": 1985, "sex": "M" },
    { "subjectId": "cust_1043", "name": "PETROV", "forename": "IVAN", "nationality": "RU", "dateOfBirth": "1978-06-14" }
  ],
  "minMatchScore": 0.5,
  "candidatesPerSubject": 5
}
```
`minMatchScore: 0.5` drops weak candidates below that threshold. `candidatesPerSubject: 5` caps the candidate list per subject so your analysts don't get buried. The endpoint can be the public Apify API, a self-hosted wrapper in front of it, or any compliance orchestrator you already have.
Step 3: receive ranked matches
Every candidate row comes back with a score, a band, plain-English reasons, a one-line summary, and a policy-driven decision:
```json
{
  "matchedInputId": "cust_1042",
  "matchScore": 0.87,
  "matchBand": "strong",
  "matchReasons": ["name exact or near-exact", "nationality match", "birth year match"],
  "mismatchSignals": [],
  "matchSummary": "name exact or near-exact, nationality match, birth year match → high confidence",
  "reviewRecommendation": "escalate",
  "decision": "block",
  "decisionReason": "Match band 'strong' maps to decision 'block' (custom policy).",
  "policyRuleId": "CUSTOM_STRONG_BLOCK",
  "watchlistChangeType": "new-match",
  "fullName": "JOHN SMITH",
  "dateOfBirth": "1985/03/15",
  "nationalitiesLabels": ["United States"],
  "charges": ["Fraud et blanchiment d'argent"],
  "chargeTranslations": ["Fraud and money laundering"],
  "issuingCountries": ["US"],
  "interpolUrl": "https://www.interpol.int/..."
}
```
The `matchSummary` is the one field you want to show a reviewer first — it is written to be human-readable and ships directly in Slack notifications.
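As an illustration, a reviewer-facing notification can be built from just a few fields of the row. A hedged sketch — the `format_review_message` helper is mine, and the actual Slack delivery (webhook URL, client) is left out:

```python
def format_review_message(row: dict) -> str:
    # One-line, human-first notification built from a candidate row.
    # Field names match the example output in this post.
    return (f"[{row['decision'].upper()}] {row['matchedInputId']} -> "
            f"{row.get('fullName', '?')} (score {row['matchScore']:.2f}): "
            f"{row['matchSummary']}")
```

Post the returned string to whatever channel your analysts already watch.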
Step 4: route on decision
```python
for row in dataset_items:
    if row["decision"] == "block":
        reject_onboarding(row["matchedInputId"], reason=row["matchSummary"])
        audit_log.write(row)
    elif row["decision"] == "review":
        analyst_queue.enqueue(row)
    else:
        approve_onboarding(row["matchedInputId"])
```
`block` does not mean reject-forever. It means "do not proceed until a compliance analyst confirms." The decision is a routing signal, not a final verdict. Practically, this is the check that separates automated screening from manual review.
Step 5: store results for audit
At minimum, persist `matchedInputId`, `entityId`, `matchScore`, `matchBand`, `decision`, `decisionReason`, `policyRuleId`, and `scrapedAt` for every candidate row. If your regulator asks why you onboarded a specific customer, those eight fields are your answer.
I store the raw dataset item as a JSON blob in a case-management table and extract the key fields into indexed columns for querying. Storage is cheap; re-running a screen to reconstruct history is not.
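One way to implement that blob-plus-indexed-columns pattern is a single SQLite table (a sketch for illustration — the table and column names are my own, and a production system would likely use your existing case-management database instead):

```python
import json
import sqlite3

# Indexed columns for the audit fields listed above, plus the raw row
# as a JSON blob so nothing is lost.
SCHEMA = """CREATE TABLE IF NOT EXISTS screening_audit (
    matched_input_id TEXT, entity_id TEXT, match_score REAL,
    match_band TEXT, decision TEXT, decision_reason TEXT,
    policy_rule_id TEXT, scraped_at TEXT, raw_json TEXT)"""

def store_audit_row(conn, row):
    conn.execute(
        "INSERT INTO screening_audit VALUES (?,?,?,?,?,?,?,?,?)",
        (row.get("matchedInputId"), row.get("entityId"),
         row.get("matchScore"), row.get("matchBand"),
         row.get("decision"), row.get("decisionReason"),
         row.get("policyRuleId"), row.get("scrapedAt"),
         json.dumps(row)))

conn = sqlite3.connect(":memory:")  # swap for your real database
conn.execute(SCHEMA)
```

Answering the regulator's question is then one `SELECT` over the indexed columns.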
Step 6: schedule ongoing monitoring
Re-screen the same cohort on a schedule (daily is overkill for most use cases — weekly is fine). Pass the same watchlistId and the actor handles the diff for you:
- New matches surface with `watchlistChangeType: "new-match"`
- Existing matches with changed fields surface as `"match-updated"` plus `watchlistChangedFields[]`
- Matches that no longer come back from Interpol surface as synthetic `"match-removed"` rows
Apify's built-in scheduler handles the cron side. The actor handles the state.
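Under the hood, that diff boils down to comparing two per-subject snapshots. A minimal re-implementation of the logic for intuition — the snapshot keying on `(subjectId, entityId)` is an assumption of this sketch; the actor's internal representation may differ:

```python
def diff_watchlist(prev, curr):
    # prev/curr: {(subject_id, entity_id): {field: value}} snapshots.
    # Returns {key: (change_type, changed_fields)} for changed pairs only.
    changes = {}
    for key, fields in curr.items():
        if key not in prev:
            changes[key] = ("new-match", [])
        elif fields != prev[key]:
            changed = sorted(f for f in set(fields) | set(prev[key])
                             if prev[key].get(f) != fields.get(f))
            changes[key] = ("match-updated", changed)
    for key in prev:
        if key not in curr:
            changes[key] = ("match-removed", [])  # synthetic row
    return changes
```

Unchanged pairs produce nothing, which is exactly why silent weeks stay silent.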
Step 7: wire a webhook for change alerts
The run summary includes SUMMARY.screening.watchlistChangeCounts. Configure your Slack or email alerter to trigger only when any of those counters are non-zero. Silent runs stay silent. A new match, a charge update, or a newly removed record is the signal to wake a human up.
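The gating condition is small enough to state in code. A sketch — the counter keys mirror the `watchlistChangeType` values in this post, but the exact shape of the summary object is an assumption:

```python
def should_alert(change_counts: dict) -> bool:
    # Fire only when at least one watchlist change counter is non-zero.
    return any(change_counts.get(k, 0) > 0
               for k in ("new-match", "match-updated", "match-removed"))
```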
How fuzzy name matching works in Interpol screening
Fuzzy matching combines Jaro-Winkler name similarity with structured identity signals (nationality, sex, date of birth) to produce a single 0.0–1.0 confidence score per candidate.
The actor scores each (subject, candidate) pair on four signals and combines them into a single matchScore between 0.0 and 1.0:
| Signal | Weight | What it measures |
|---|---|---|
| Name similarity | 55% | Jaro-Winkler distance between subject name and Interpol record name, normalized |
| Nationality match | 15% | Subject nationality (ISO2) appears in the candidate's nationalities[] |
| Sex match | 10% | Subject sex matches the candidate's sex |
| DOB year match | 20% | Subject year-of-birth matches the candidate's birth year |
The output band is derived from the score: >=0.85 = strong, >=0.65 = possible, >=0.4 = weak, otherwise none. These bands are what the decision policy maps to block / review / allow.
Every row ships with a matchReasons[] array of human phrases ("name exact or near-exact", "nationality match", "birth year mismatch") and a matching mismatchSignals[]. This is the extractability thing AI systems and human auditors both need — you can paste any row into a review notification and a reviewer can understand it without reading source code.
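The scoring arithmetic is simple enough to restate. A sketch of the published weights and band thresholds — the name-similarity input is taken here as a precomputed 0–1 value (the actor computes it with Jaro-Winkler), and the exact combination order is an assumption:

```python
def match_score(name_similarity, nationality_match, sex_match, dob_year_match):
    # Weighted sum per the table above: name 55%, nationality 15%,
    # sex 10%, DOB year 20%. Boolean signals contribute full weight or zero.
    return (0.55 * name_similarity
            + 0.15 * (1.0 if nationality_match else 0.0)
            + 0.10 * (1.0 if sex_match else 0.0)
            + 0.20 * (1.0 if dob_year_match else 0.0))

def match_band(score):
    # Band thresholds per the text: >=0.85 strong, >=0.65 possible,
    # >=0.4 weak, otherwise none.
    if score >= 0.85:
        return "strong"
    if score >= 0.65:
        return "possible"
    return "weak" if score >= 0.4 else "none"
```

A perfect name with nationality and DOB year but no sex signal still lands at 0.90 — comfortably `strong`, which is why missing one signal rarely sinks a true match.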
How do I reduce false positives in Interpol screening?
Three mechanisms, in order of aggressiveness:
- Raise `minMatchScore`. The default of `0.5` is forgiving. Push it to `0.65` and you drop the weak band entirely. Trade-off: legitimate spelling variants of the same name (transliterations, diacritics, maiden/married names) can slip through.
- Enable signal requirements. Set `matchConfig.requireNationalityMatch: true`, `requireSexMatch: true`, or `requireDobYearMatch: true`. Candidates missing the required signal get `matchScore: 0` and `matchBand: none` regardless of how well the name matches. Good for high-volume onboarding where you collect clean identity data.
- Switch to the `strict-kyc` preset. This enables all three signal requirements and treats `possible` matches as `block` (not `review`). Designed for regulated fintech KYC where an extra block-then-reviewer-clears pass is cheaper than missing a true positive.
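Mechanism 2 is effectively a hard gate applied before banding. A sketch — the flag names follow the `matchConfig` options above, but the gating order relative to scoring is my assumption:

```python
def gate_score(score, signals, config):
    # signals: which identity signals actually matched for this candidate
    # config:  which requireXxxMatch flags are enabled
    required = {"nationality": config.get("requireNationalityMatch", False),
                "sex": config.get("requireSexMatch", False),
                "dobYear": config.get("requireDobYearMatch", False)}
    if any(on and not signals.get(name) for name, on in required.items()):
        return 0.0  # disqualified: band will be "none" whatever the name says
    return score
```

A 0.9 name match against the wrong nationality drops straight to zero, which is the "regardless of name similarity" behavior described above.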
Observed in internal testing (April 2026, n=47 subjects): a "name contains" baseline generated 6.4 candidates per subject; `strict-kyc` reduced that to 0.8 candidates per subject that passed the threshold. The caveat: that sample is not representative of a production book — it was designed to include intentional near-misses. Results will vary depending on the name distribution, nationality spread, and data quality of your actual customers.
You can also use `aliases[]` on each subject to give the matcher more chances without lowering the bar — an alias that matches an Interpol record when the primary name does not is scored with full weight.
How do I monitor Interpol matches over time?
Use a persistent watchlistId in screen mode. The actor snapshots per-subject matches on first run, then diffs subsequent runs:
```json
{
  "mode": "screen",
  "watchlistId": "onboarding-2026-04",
  "subjects": [ /* same list as before */ ]
}
```
Every row in run two gets one of:
- `watchlistChangeType: "new-match"` — this subject-entity pair is new since the last snapshot
- `"match-updated"` plus `watchlistChangedFields[]` — same pair, but at least one field changed
- `"match-removed"` — this pair appeared in the previous snapshot but not this one (synthetic row)
The run summary includes watchlistChangeCounts so your alerting layer can fire only on non-zero days. This is the pattern I use for ongoing monitoring: weekly cron, webhook on non-zero counts, silent on zero. Most weeks nothing happens. The one week something does, the alert is actionable because every row includes matchSummary and the updated fields.
What are the alternatives to this workflow?
Five real options, from building it yourself to buying a full platform. Each has trade-offs in cost, depth, and the compliance-ops work you still have to do.
- Build directly against the raw Interpol public API. Free in data cost. Expensive in engineering time. No fuzzy matching, no decision policy, no diff engine, no explainability. The best choice only if you need extreme customization and have compliance engineering headcount.
- Use the Apify actor in this post. Cheap per screen ($0.002 per notice returned), plug-and-play API, explainable matches, decision policy, watchlist monitoring. One of the best options for fintech and crypto teams that want to own the pipeline but not the scraper. Limitation: covers Interpol public notices only.
- Combine multiple Apify compliance actors. Pair Interpol with `ofac-sanctions-search` for US Treasury SDN and `opensanctions-search` for aggregated global sanctions and PEP. Same API pattern across all three. You assemble the stack; you own the cost.
- Commercial compliance platforms. Refinitiv World-Check, Dow Jones Risk & Compliance, ComplyAdvantage, LexisNexis. Deep coverage, strong PEP data, adverse media. Expensive — typical enterprise contracts run $25k to $250k annually with per-search minimums. Best for regulated banks with large compliance teams.
- Industry-built MCP servers. Servers like `financial-crime-screening-mcp` and `counterparty-due-diligence-mcp` expose screening through Model Context Protocol so AI agents can screen entities directly. Good for agentic workflows, not a replacement for structured batch KYC.
| Option | Cost per 1,000 customers | Setup time | Ongoing monitoring | Fuzzy match | PEP coverage |
|---|---|---|---|---|---|
| DIY against raw API | $0 + dev time | 4-6 weeks | You build it | You build it | None |
| Apify Interpol actor | ~$10 | 1-2 hours | Built-in | Built-in | None |
| Apify Interpol + OFAC + OpenSanctions | ~$150 | 1 day | Built-in | Built-in | Via OpenSanctions |
| Commercial platform | $25k-250k/yr | 4-12 weeks procurement | Yes | Yes | Yes |
| MCP servers | ~$50 | 1 day | Agent-driven | Yes | Varies |
Each approach has trade-offs in coverage, cost, and engineering effort. The right choice depends on regulator expectations, customer volume, risk appetite, and how much compliance engineering headcount you have.
Pricing and features based on publicly available information as of April 2026 and may change.
Best practices
- Collect identity signals eagerly. Nationality, DOB (or year of birth), and sex are the three signals that cut false positives hardest. Ask for them at signup, even if only nationality is legally required.
- Use a policy preset, not hand-tuned thresholds. `strict-kyc` is a calibrated bundle. Switching individual thresholds one by one is how compliance config drifts into "nobody remembers why we set it this way."
- Screen against Red + UN at onboarding. Yellow (missing persons) is rarely relevant for KYC. Red (wanted for arrest) and UN Security Council are the two that matter.
- Store `policyRuleId` in your audit log. When your policy changes, historical decisions are still traceable to the exact rule that produced them.
- Re-screen your entire book periodically. Weekly is excessive for low-risk segments, monthly is a reasonable default, quarterly is the minimum most regulators expect for ongoing customer due diligence.
- Wire webhooks for changes, not for every run. `SUMMARY.screening.watchlistChangeCounts` is the change signal. Alerting on every run trains analysts to ignore alerts.
- Pair Interpol with at least OFAC. Interpol covers international fugitives. OFAC covers US-specific sanctions. Neither is a substitute for the other.
- Never auto-reject on a `strong` match. A reject is a legal act. Strong matches go to an analyst queue with SLA, not straight to rejection.
Common mistakes
- Treating a Red Notice as an arrest warrant. It is not. A Red Notice is a request from Interpol to locate and provisionally arrest pending extradition — arrest authority remains with individual countries. Always pair `decision: block` with human review.
- Using only `name` as the match signal. Common names flood the review queue. Adding nationality and DOB year costs the customer nothing to provide and drops false positives sharply.
- Screening only at onboarding. Regulators expect ongoing customer due diligence. Set up a watchlist monitor the same day you set up the onboarding screen.
- Not storing `matchReasons[]` for audit. A score of 0.87 does not explain itself. A reasons array does. Store both.
- Assuming Interpol covers PEP and adverse media. It does not. Interpol covers law-enforcement notices. PEP and adverse media come from other sources.
- Building your own fuzzy matcher with Levenshtein. Jaro-Winkler is a better starting point for short names — it weights the prefix of the string more heavily, which matches how transliteration errors actually appear in practice. Rolling your own is a six-month detour.
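To make the Jaro-Winkler point concrete, here is a compact textbook implementation for illustration only — production screening should lean on a vetted library (e.g. `jellyfish` or `rapidfuzz`) rather than this sketch:

```python
def jaro(s1: str, s2: str) -> float:
    # Classic Jaro similarity: count matches within a sliding window,
    # then penalize transpositions among the matched characters.
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, k = 0, 0
    for i in range(len1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    # Winkler boost: reward a common prefix of up to 4 characters.
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

The prefix boost is the property the bullet above describes: a stable first few characters (common in transliterated names) lifts the score even when the tail diverges.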
The decision policy layer: why it matters
A raw matchScore: 0.87 is a number. A reviewer does not act on a number. They act on a decision.
The decision policy layer turns score bands into actions:
```json
{
  "policyPreset": "strict-kyc",
  "decisionPolicy": {
    "strong": "block",
    "possible": "block",
    "weak": "review",
    "none": "allow"
  }
}
```
Every candidate row carries decision, decisionReason, and policyRuleId. The decisionReason is a human sentence ("Match band 'possible' maps to decision 'block' (strict-kyc preset)"). The policyRuleId is a stable code (STRICT_POSSIBLE_BLOCK) that ties the row back to the exact rule that fired — which is what your auditor will ask for six months later.
Two payoffs. First, your workflow orchestration layer reads one field (decision) and routes. No business logic in your pipeline code. Second, when policy changes, the rule history is in the dataset. You can replay any historical run against the new policy and see what would have happened.
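Replaying history against a new policy is then a pure function of the stored bands. A sketch — the band-to-decision map mirrors the `decisionPolicy` object above; the `"review"` fallback for unknown bands is my own conservative assumption:

```python
def replay_policy(rows, new_policy):
    # rows: historical candidate rows with their stored matchBand/decision.
    # Returns (subject, old_decision, new_decision) where the outcome changes.
    changed = []
    for row in rows:
        new = new_policy.get(row["matchBand"], "review")  # assumed fallback
        if new != row["decision"]:
            changed.append((row["matchedInputId"], row["decision"], new))
    return changed
```

Run it over last quarter's dataset before shipping a policy change and you know exactly which historical subjects would have been routed differently.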
Case study: a crypto exchange onboarding flow
A crypto exchange I consulted with had a single manual screening step: an analyst typed each new customer's name into the Interpol public website and screenshotted the results. Average handling time: 4 minutes per customer. At 200 new customers per day, that was 13 analyst-hours daily.
They replaced it with a batch screen using this actor in screen mode, strict-kyc preset, and a weekly cron for monitoring. After one month (n=4,812 subjects screened):
- Automated clear rate (decision = `allow`, no human review): 96.4%
- Sent to analyst queue (decision = `review`): 3.2%
- Auto-blocked pending review (decision = `block`): 0.4%
- Average analyst touch per customer dropped from 4 minutes to 12 seconds (reviews only).
These numbers reflect one crypto exchange with a specific customer profile. Results will vary depending on the jurisdictions you onboard, how clean your signup data is, and how aggressive your preset configuration is. The shape of the win (most customers clear automatically, a small tail gets reviewed) is the repeatable part.
Implementation checklist
- Sign up at apify.com (the $5/month free tier covers ~10 large KYC batches).
- Add the actor to your account: apify.com/ryanclinton/interpol-red-notices.
- Pick a `policyPreset` — `strict-kyc` for regulated fintech, `balanced` for most other cases.
- Make sure your signup form collects name, nationality (ISO2), DOB or year of birth, and sex.
- Wire your onboarding service to POST a batch to the Apify run endpoint whenever a new cohort is ready.
- Build the three decision handlers: `block` (hold + alert), `review` (analyst queue), `allow` (proceed).
- Set up an Apify schedule that reruns the same screen with the same `watchlistId` weekly.
- Wire a webhook to fire only when `watchlistChangeCounts` is non-zero.
- Write a `policyRuleId`-indexed audit table so you can answer the "why did you onboard this customer" question in SQL.
- Add at minimum OFAC screening to the same pipeline. Optionally add OpenSanctions for PEP and global coverage.
Limitations
- Interpol only. This actor does not cover OFAC SDN, EU sanctions, UK HM Treasury list, or PEP databases. It is one layer in a broader sanctions stack, not a standalone compliance platform.
- Public notices only. The private International Criminal Police Organization notice colors (Blue, Green, Black, Orange, Purple) are not accessible through the public API. If you need those, you work with your national central bureau, not this actor.
- No adverse media. Adverse-media screening (news articles linking a subject to financial crime allegations) is a separate data source and a separate workflow. It is often the first signal before a notice exists.
- Name-only for aliases without nationality. If a subject has aliases but no nationality, the matcher falls back to name similarity alone for that subject. The `requireNationalityMatch` flag will disqualify them under `strict-kyc` — which is usually correct, but occasionally misses legitimate edge cases (stateless persons, recent naturalizations).
- Not a substitute for human judgement. A `decision: block` is a routing signal that says "human before proceed." If your workflow auto-rejects on block without a human step, that is a design choice of your pipeline, not a recommendation of this workflow.
Key facts about Interpol KYC screening
- The Interpol public notice database typically contains around 6,000 to 7,500 Red Notices at any point in time, refreshed continuously as new notices are issued and old ones lapse.
- The four matching signals (name, nationality, sex, DOB year) and their weights (55 / 15 / 10 / 20) are the same whether you run `strict-kyc`, `balanced`, or `loose` — the presets change the thresholds and decision mapping, not the scoring function.
- Jaro-Winkler outperforms Levenshtein on short personal names because it weights prefix matches more heavily — a useful property when transliteration errors tend to appear mid-string or at the end.
- Red Notices cannot be issued for purely political, military, religious, or racial reasons under Interpol's constitution (Article 3). This does not mean no political abuse ever occurs, but it is the stated policy.
- A UN Security Council listing is legally stronger than an Interpol Red Notice: UN member states have treaty obligations to enforce the travel bans and asset freezes. Red Notices request cooperation but do not bind.
- Running the actor with `diffMode: true` on a schedule uses a persistent key-value store baseline, so scheduled runs share state even if the actor itself gets redeployed.
Glossary
- Red Notice — an Interpol request to law enforcement worldwide to locate and provisionally arrest a person pending extradition. Not an arrest warrant.
- UN Security Council consolidated list — individuals and entities subject to UN sanctions; hosted in the same public Interpol notice API.
- Jaro-Winkler — a string similarity algorithm that weights matching prefixes more heavily. Well-suited to short personal names.
- Match band — a categorical summary of `matchScore`: `strong` (≥0.85), `possible` (≥0.65), `weak` (≥0.4), `none` (below).
- Policy preset — a named configuration bundle (`strict-kyc`, `balanced`, `loose`, `custom`) that sets signal requirements and band-to-decision mapping in one step.
- `watchlistId` — a stable string you pass to screen mode so the actor can snapshot per-subject matches and diff across runs.
- `policyRuleId` — a stable code attached to every decision that ties the row back to the exact rule that produced it. Critical for audit.
How does this compare to commercial KYC platforms?
Commercial KYC platforms (Refinitiv World-Check, Dow Jones, ComplyAdvantage, LexisNexis, Jumio) ship with Interpol coverage, but they also ship with sanctions lists, PEP data, adverse media, ID verification, and case management. The Apify Interpol actor does one thing: the Interpol layer, with screening depth and monitoring. It is designed for teams that want to own their compliance pipeline rather than outsource it — or that want an Interpol-specific coverage gap filler alongside a lighter commercial platform.
How do I screen customers against Interpol in Python?
Use the Apify Python SDK (or a plain HTTP client):
```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run = client.actor("ryanclinton/interpol-red-notices").call(run_input={
    "mode": "screen",
    "noticeType": "red",
    "policyPreset": "strict-kyc",
    "watchlistId": "onboarding-2026-04",
    "subjects": [
        {"subjectId": "cust_1042", "fullName": "John Smith",
         "nationality": "US", "yearOfBirth": 1985, "sex": "M"}
    ],
    "minMatchScore": 0.5,
    "candidatesPerSubject": 5,
})

for row in client.dataset(run["defaultDatasetId"]).iterate_items():
    if row.get("decision") == "block":
        handle_block(row)      # your hold-and-alert handler
    elif row.get("decision") == "review":
        enqueue_review(row)    # your analyst-queue handler
```
The JavaScript SDK is equivalent — apify-client on npm, same method names.
What this workflow does NOT do
- It does not make you SOC 2 or PCI compliant. SOC 2 is an org-wide control assessment. This is one screening step.
- It does not replace a full AML program. Transaction monitoring, beneficial ownership analysis, adverse media, PEP screening, and customer risk rating are separate workstreams.
- It does not provide real-time transaction screening. This is batch screening at onboarding and on schedule. Transaction-level screening against sanctions is a separate category of tool.
- It does not guarantee legal adjudication. A match is a signal, not a verdict. All block decisions require human review.
- It does not cover PEP (politically exposed persons) data. PEP lists come from separate data vendors and are usually bundled with commercial platforms.
- It does not cover adverse media. For a news-based adverse-media layer, pair with a separate data source.
This is deliberate. The actor is one well-specified layer in a broader KYC stack, not a white-labeled compliance platform. Pair it with OFAC SDN screening, OpenSanctions for global lists and PEP, and whatever adverse-media source your program uses.
Pricing and what a typical run costs
- $0.002 per notice record returned, after filtering. A record is a single (subject, candidate) pair.
- Typical KYC batch: 50 subjects × ~5 candidates each = ~250 records = ~$0.50 per screening run.
- Failed runs, zero-result searches, and diff-mode removed rows charge $0. You pay for delivered signal only.
- Apify's $5/month free tier covers approximately 10 large screening batches before you hit the paid tier.
- Compute cost is separate and minimal — a typical 50-subject batch uses a few seconds of CU.
For reference, a mid-size fintech onboarding 200 customers a day and running weekly monitoring against a 10,000-subject book spends roughly $15 to $30 per month on Interpol screening. Commercial platforms with $25k-minimum annual contracts are 60x to 200x more expensive for the Interpol layer specifically, though they bundle other coverage you may or may not need.
The cost calculator on ApifyForge runs the numbers for your expected volume.
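As a back-of-envelope check, the per-run arithmetic above can be sketched in a few lines. The $0.002 rate is the published per-record price; the function name and defaults are illustrative, not part of any API:

```python
PRICE_PER_RECORD = 0.002  # dollars per delivered (subject, candidate) record

def screening_cost(subjects: int, candidates_per_subject: float = 5) -> float:
    """Estimated cost of one screening run, in dollars.

    Failed runs, zero-result searches, and diff-mode removed rows are free,
    so this is an upper bound for a batch of the given size.
    """
    return subjects * candidates_per_subject * PRICE_PER_RECORD

# 50 subjects at ~5 candidates each is roughly $0.50 per run.
print(f"${screening_cost(50):.2f}")
```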
When you need this
Use this workflow if you match several of these:
- You operate a regulated fintech, crypto exchange, marketplace with payouts, payment processor, or money services business
- You onboard 10+ customers per day and can't afford 3-5 minutes of analyst time per screen
- Your regulator expects auditable screening records per customer
- You need ongoing monitoring (weekly or monthly re-screening) of your existing book
- You want to own your compliance pipeline rather than outsource it
- You already run OFAC or OpenSanctions and want to add the Interpol layer
You probably don't need this if:
- You are pre-regulatory (no KYC program yet — you need policy and procedure first, not tooling)
- Your regulator requires a commercial compliance platform specifically (check your license conditions)
- You need PEP, adverse media, or beneficial ownership as the primary screen — those are different data sources
- You onboard fewer than one customer per week (manual review is cheaper than integration)
Broader applicability
The patterns in this post apply beyond Interpol KYC screening. Any batch compliance workflow that matches records against an authoritative list ends up needing the same four things:
- Explainable matching — score + reasons + summary, not just a number
- A decision policy layer — score bands mapped to actions your pipeline consumes
- Persistent identifiers — so re-runs know what has changed since last time
- Audit artifacts — rule IDs, timestamps, and the input that produced each decision
This is true for sanctions screening, PEP screening, adverse media monitoring, supplier due diligence, counterparty onboarding, and even internal anti-fraud workflows. Get the skeleton right once and the data source becomes a swappable input. We have a broader compliance use-case guide that covers the cross-list pattern.
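A minimal sketch of that skeleton, pulling the four pieces together. The score bands, rule IDs, and field names here are illustrative assumptions, not the actor's actual preset:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative score bands mapped to actions; tune per policy.
BANDS = [(0.85, "block"), (0.60, "review")]

@dataclass
class ScreeningDecision:
    subject_id: str          # persistent identifier for re-run diffs
    score: float
    decision: str
    rule_id: str             # audit artifact: which rule fired
    reasons: list = field(default_factory=list)  # explainability
    decided_at: str = ""     # audit artifact: timestamp

def decide(subject_id: str, score: float, reasons: list) -> ScreeningDecision:
    decision, rule_id = "allow", "default-allow"
    for threshold, action in BANDS:
        if score >= threshold:
            decision, rule_id = action, f"band>={threshold}"
            break
    return ScreeningDecision(
        subject_id=subject_id,
        score=score,
        decision=decision,
        rule_id=rule_id,
        reasons=reasons,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```

The point of the dataclass is that every decision carries its own audit trail: swap the data source and the skeleton stays the same.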
Common misconceptions
- "A Red Notice means the person is a criminal." Not necessarily. A Red Notice is a request to locate pending extradition. Guilt or innocence is decided by courts, not by Interpol.
- "If we use a commercial platform we don't need to worry about screening coverage." Coverage varies by vendor. Some bundle Interpol, some do not. Some update daily, some weekly. Ask for the coverage sheet before signing.
- "Fuzzy matching is just spell-check." It is not. Good fuzzy matching combines string distance with structured identity signals (nationality, DOB, sex). String distance alone floods the review queue.
- "Once we screen at onboarding we are done." Regulators expect ongoing customer due diligence. New notices get issued. Existing ones get updated. Monthly monitoring is a practical minimum for most regulated businesses.
- "Auto-blocking on
strongmatch is safer." Auto-blocking without human review creates legal exposure. A block decision is a routing signal — a human confirms the reject.
Frequently asked questions
What is a Red Notice?
A Red Notice is a request from Interpol to law enforcement worldwide to locate and provisionally arrest a person pending extradition. It is not an international arrest warrant — arrest authority remains with individual member countries. The Interpol General Secretariat reviews notices against Article 3 of the Interpol Constitution, which prohibits notices for purely political, military, religious, or racial reasons. For KYC screening purposes, treat a Red Notice match as one signal in a broader compliance review, not a final adjudication.
How accurate is fuzzy name matching against Interpol?
Accuracy depends on the identity signals you provide. Name-only matching is noisy — common names produce many false positives. Adding nationality, DOB year, and sex cuts false positives sharply because those signals disqualify candidates that would otherwise pass on name alone. In internal testing (April 2026, n=47 subjects), enabling the strict-kyc preset reduced candidates per subject from about 6.4 to about 0.8. The false-negative rate depends on data quality — aliases, transliteration variants, and partial DOB data all matter.
Do I need a commercial compliance platform?
Not necessarily. Regulated banks with large compliance teams typically use one (Refinitiv, Dow Jones, ComplyAdvantage) because it bundles sanctions, PEP, adverse media, and case management. Fintechs, crypto exchanges, and smaller money services businesses often build pipelines from specialized data sources — Interpol, OFAC, OpenSanctions, PEP providers, adverse-media APIs — and orchestrate them themselves. Cost difference is significant: commercial platforms run $25k to $250k annually; specialized pipelines run $50 to $500 monthly at comparable volume.
How often should I re-screen my existing customers?
Depends on risk rating and jurisdiction. The FATF guidance and most local AML regulations expect ongoing customer due diligence. A practical default is monthly re-screening for low-risk customers and weekly for high-risk segments (politically exposed persons, high-value accounts, customers from high-risk jurisdictions). Watchlist monitoring with watchlistId makes this cheap because you only get charged on actual changes, not on steady-state runs.
Is Interpol enough for KYC sanctions screening?
No. Interpol covers criminal notices — wanted persons, missing persons, UN Security Council. For a complete sanctions screen you also need OFAC SDN (US Treasury), EU consolidated list, UK HM Treasury, and ideally OpenSanctions for aggregated global coverage plus PEP. Apify has separate actors for each layer; pair them in the same pipeline. See the compliance screening comparison for the full stack.
What happens on a false positive?
The candidate lands in your review queue (decision: "review" under the default policy, or decision: "block" under strict-kyc). A compliance analyst reviews the match using matchSummary, matchReasons[], mismatchSignals[], and the interpolUrl linking to the raw notice. They clear the false positive, log the rationale, and the customer is onboarded. Good policy design keeps the false-positive rate low enough that analyst review is tractable — typically under 1% of subjects end up in review under strict-kyc.
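A review-queue handoff can carry those fields straight through. This sketch assumes the output field names listed above; analystNotes is a hypothetical field your own case tooling would fill in at clearance:

```python
def build_review_ticket(row: dict) -> dict:
    """Flatten one screening row into an analyst-facing review ticket.

    Keeps the explainability fields (summary, reasons, mismatches) and
    the link to the raw notice so the analyst never re-derives context.
    """
    return {
        "subjectId": row["subjectId"],
        "decision": row["decision"],
        "summary": row.get("matchSummary", ""),
        "reasons": row.get("matchReasons", []),
        "mismatches": row.get("mismatchSignals", []),
        "sourceNotice": row.get("interpolUrl", ""),
        "analystNotes": "",  # hypothetical: filled in at clearance, kept for audit
    }
```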
Can I run this screening entirely on my own infrastructure?
Partially. The Apify actor itself runs on the Apify platform. The Interpol public API is free and anyone can build against it directly — the code is not especially complex. What is hard to rebuild from scratch is the explainability layer, the policy engine, and the diff engine. If you have the engineering team and the budget, you can absolutely rebuild it. If you don't, running against the actor is faster and one of the best starting points.
Getting started
- Create an Apify account at apify.com — the $5/month free tier covers the first few screening batches.
- Open the actor page at apify.com/ryanclinton/interpol-red-notices and click Try for free.
- Run a test batch with three or four sample subjects in screen mode with policyPreset: "balanced", and inspect the output shape.
- Tune the preset: switch to strict-kyc if you're doing regulated fintech onboarding.
- Integrate the run endpoint into your signup service: POST the batch, poll for completion, fetch the dataset, route on decision.
- Schedule weekly monitoring with the same watchlistId and wire a webhook to fire only on a non-zero watchlistChangeCount.
- Pair with OFAC screening the same day; Interpol alone is not a complete sanctions screen.
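The webhook side of the monitoring step can be as small as a change-count check. The watchlistChangeCount field name is taken from the workflow described above; how you deliver the alert is up to your stack:

```python
def should_alert(run_summary: dict) -> bool:
    """Fire the monitoring alert only when the watchlist actually changed.

    Steady-state runs report watchlistChangeCount == 0 and stay silent;
    a missing field is treated the same as no change.
    """
    return run_summary.get("watchlistChangeCount", 0) > 0

# Example webhook payloads:
print(should_alert({"watchlistChangeCount": 3}))  # new or updated notices
print(should_alert({"watchlistChangeCount": 0}))  # steady state, no alert
```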
A fuller deploy-to-production walkthrough with Python and JavaScript snippets lives in the actor page on ApifyForge.
Ryan Clinton operates 300+ Apify actors and builds developer tools at ApifyForge. The Interpol screening actor has processed thousands of production KYC batches across fintech and crypto customers since early 2026.
Last updated: April 2026
This guide focuses on the Apify-hosted Interpol screening actor, but the same patterns — explainable matching, policy-driven decisions, persistent watchlist IDs, and webhook-on-change alerting — apply broadly to any batch compliance workflow that screens records against an authoritative list.