Website Change Monitor & Diff Tracker
Track content changes across any number of websites automatically. Compares pages against stored snapshots using SHA-256 hashing and reports exactly which lines were added or removed. Supports webhook and Slack notifications.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| site-monitored | Charged per website monitored for changes. | $0.10 |
Example: 100 events = $10.00 · 1,000 events = $100.00
Documentation
Track content changes across any number of websites automatically. Website Change Monitor fetches web pages, compares them against previously stored snapshots using SHA-256 hashing, and reports exactly which lines of text were added or removed. Schedule it to run hourly, daily, or weekly from the Apify console to build a continuous monitoring pipeline -- no code required.
Why use Website Change Monitor?
Websites change constantly -- competitors update pricing, regulators revise policies, partners alter terms, and documentation evolves without notice. Manually checking dozens of pages is tedious and unreliable. Missing a critical change can mean lost revenue, compliance violations, or broken integrations that go undetected for days or weeks.
Website Change Monitor solves this by automating the entire detection pipeline. It fetches each URL, extracts content using your preferred comparison mode, hashes the result for instant duplicate detection, and produces a line-by-line diff when changes occur. Combined with Apify's scheduling and integration ecosystem, you get a fully automated alerting system that watches your most important pages around the clock and notifies you the moment something changes.
Key features
- SHA-256 content hashing -- generates a cryptographic hash of each page's content for instant, reliable change detection without expensive full-text comparisons.
- Three comparison modes -- choose between visible text extraction (strips scripts, styles, and tags), full raw HTML comparison, or CSS selector targeting to monitor a specific page section.
- Line-by-line diff reports -- when changes are detected, see precisely which lines were added and which were removed between the previous and current snapshots.
- Persistent snapshot storage -- use a named Key-Value store so snapshots survive across scheduled runs, building a continuous monitoring timeline.
- Change-only output filtering -- suppress unchanged URLs from the dataset so you only receive items with status
changed,new, orerror. - First-run baseline capture -- on the initial run for any URL, the actor captures and stores a baseline snapshot so change detection begins on the next run.
- Multi-URL batch processing -- pass a list of URLs and monitor all of them in a single run, with per-URL status tracking and independent error handling.
- Robust error handling -- network failures, timeouts, and HTTP errors are caught gracefully and reported as error-status items instead of crashing the entire run.
- 30-second fetch timeout -- each URL is fetched with an AbortController-based timeout to prevent slow or unresponsive pages from blocking the entire batch.
- Truncated output snapshots -- content is capped at 1,000 characters in the dataset output for readability, while full snapshots are stored in the KV store for accurate diffing.
- Webhook notifications -- send change alerts to any HTTP endpoint as a JSON POST when changes are detected.
- Slack notifications -- send formatted Slack messages with change summaries, diff previews, and direct links using Slack Block Kit.
How to use Website Change Monitor
Using Apify Console
- Navigate to the actor page -- go to Website Change Monitor on Apify and click "Try for free" or "Start."
- Enter your URLs -- add one or more website URLs in the "URLs to Monitor" field. You can paste them one per line or use the string list editor.
- Select a comparison mode -- choose "Text" for clean visible-text comparison (recommended), "HTML" for full markup comparison, or "CSS Selector" to target a specific page section like .pricing or #main-content.
- Set a named KV Store -- enter a name in the "KV Store Name" field (e.g., my-website-snapshots) so snapshots persist between scheduled runs. Without this, snapshots are lost after each run and every execution will report all URLs as "new."
- Run or schedule -- click "Start" for a one-off run, or configure a schedule (hourly, daily, weekly) from the Apify console to enable continuous monitoring.
Using the API
You can trigger the actor programmatically via the Apify API. Send a POST request to the actor's run endpoint with your input JSON in the request body. See the API and Integration section below for code examples in Python, JavaScript, and cURL.
Input parameters
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | string[] | Yes | -- | List of website URLs to monitor for content changes. |
| mode | string | No | "text" | Comparison mode: text (visible text only), html (full HTML markup), or selector (CSS selector targeting). |
| cssSelector | string | No | -- | CSS selector to target a specific page section (e.g., .pricing, #main-content). Only used when mode is selector. |
| notifyOnlyChanges | boolean | No | true | If true, only output URLs with status changed, new, or error. If false, output all monitored URLs. |
| kvStoreName | string | No | -- | Named Key-Value store for persistent snapshot storage across runs. If empty, uses the default actor KV store. |
| webhookUrl | string | No | -- | HTTP endpoint to POST change notifications to. Receives a JSON payload with url, changeType, diffSummary, timestamp, and snapshots when changes are detected. |
| slackWebhookUrl | string | No | -- | Slack incoming webhook URL. Sends a formatted Slack message with change details, a diff preview, and a direct link when changes are detected. |
Example input
```json
{
  "urls": [
    "https://competitor.example.com/pricing",
    "https://partner.example.com/terms",
    "https://docs.example.com/api/changelog"
  ],
  "mode": "text",
  "notifyOnlyChanges": true,
  "kvStoreName": "my-website-snapshots"
}
```
Tips for input
- Always set a named KV Store for scheduled runs. Without it, the default KV store is ephemeral and every run reports all URLs as "new."
- Use CSS selector mode to reduce noise from ads, timestamps, or dynamic content. Target .main-content, #article-body, or similar stable selectors.
- Start with text mode unless you need to detect HTML structural changes. Text mode strips scripts, styles, SVG, and non-visible elements for cleaner comparisons.
- Split very large URL lists (500+) across multiple scheduled runs to stay within timeout limits.
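The last tip is easy to automate. A minimal Python sketch for splitting a large URL list into batches you can feed to separate scheduled runs (the batch size and URLs are illustrative):

```python
def chunk_urls(urls, size=100):
    """Split a large URL list into batches for separate scheduled runs."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

# 250 hypothetical URLs split into runs of at most 100 each.
batches = chunk_urls([f"https://example.com/page/{n}" for n in range(250)], size=100)
print([len(b) for b in batches])  # [100, 100, 50]
```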
Output
Each item in the output dataset represents one monitored URL. Here is an example showing a detected change:
```json
{
  "url": "https://competitor.example.com/pricing",
  "status": "changed",
  "previousSnapshot": "Basic Plan $9/mo\nPro Plan $29/mo\nEnterprise Plan $99/mo\nAll plans include 24/7 support",
  "currentSnapshot": "Basic Plan $12/mo\nPro Plan $35/mo\nEnterprise Plan $99/mo\nAll plans include 24/7 support\nNew: Startup Plan $19/mo",
  "changes": {
    "addedText": [
      "Basic Plan $12/mo",
      "Pro Plan $35/mo",
      "New: Startup Plan $19/mo"
    ],
    "removedText": [
      "Basic Plan $9/mo",
      "Pro Plan $29/mo"
    ],
    "summary": "3 line(s) added, 2 line(s) removed"
  },
  "lastChecked": "2026-02-17T10:00:00.000Z",
  "previousChecked": "2026-02-16T10:00:00.000Z",
  "contentHash": "a3f2b8c1d9e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0"
}
```
A newly tracked URL on first run:
```json
{
  "url": "https://docs.example.com/api/changelog",
  "status": "new",
  "previousSnapshot": null,
  "currentSnapshot": "API Changelog\nv3.2.0 -- Added batch endpoint\nv3.1.1 -- Fixed rate limiting...",
  "changes": {
    "addedText": [],
    "removedText": [],
    "summary": "First snapshot captured. Changes will be reported on next run."
  },
  "lastChecked": "2026-02-17T10:00:00.000Z",
  "previousChecked": null,
  "contentHash": "b4c3d2e1f0a9b8c7d6e5f4a3b2c1d0e9f8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c3"
}
```
An error result when a page is unreachable:
```json
{
  "url": "https://down-site.example.com",
  "status": "error",
  "previousSnapshot": null,
  "currentSnapshot": "",
  "changes": null,
  "lastChecked": "2026-02-17T10:00:00.000Z",
  "previousChecked": null,
  "contentHash": "",
  "errorMessage": "HTTP 503 Service Unavailable"
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| url | string | The monitored URL. |
| status | string | One of changed, unchanged, new, or error. |
| previousSnapshot | string or null | Truncated content from the previous run (null on first run or error). |
| currentSnapshot | string | Truncated content from the current run (empty string on error). |
| changes | object or null | Contains an addedText array, a removedText array, and a summary string. Null on error. |
| lastChecked | string | ISO 8601 timestamp of the current check. |
| previousChecked | string or null | ISO 8601 timestamp of the previous check (null on first run). |
| contentHash | string | SHA-256 hash of the current content (empty on error). |
| errorMessage | string | Present only when status is error. Describes the failure reason. |
Use cases
- Competitor pricing intelligence -- monitor competitor pricing pages to detect price changes, new tiers, or removed plans before they affect your market positioning.
- Compliance and legal monitoring -- track terms-of-service, privacy policy, and regulatory notice pages for changes that may require legal review or policy updates.
- SEO and content tracking -- watch competitor landing pages, meta content, and blog posts for changes that could impact your search strategy.
- API documentation monitoring -- detect changelog updates, deprecation notices, or breaking changes in third-party API docs before they cause production issues.
- Brand protection -- monitor pages that reference your brand to detect unauthorized modifications, counterfeit listings, or partner compliance violations.
- E-commerce inventory and availability -- track product pages for stock status changes, new product launches, or description updates.
- Government and regulatory watch -- monitor federal registers, agency announcements, or grant program pages for new postings and policy revisions.
- Deployment verification -- confirm your own website deployments went live correctly by comparing pre-deployment and post-deployment snapshots.
- News and media monitoring -- track press release pages, newsrooms, or publication landing pages for new content as it appears.
- Academic and research tracking -- watch institutional pages, journal tables of contents, or conference sites for new publications and calls for papers.
API & Integration
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run_input = {
    "urls": [
        "https://competitor.example.com/pricing",
        "https://partner.example.com/terms"
    ],
    "mode": "text",
    "notifyOnlyChanges": True,
    "kvStoreName": "my-website-snapshots"
}

run = client.actor("qcxKU2ReRjP5NmlZR").call(run_input=run_input)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['url']} -- {item['status']}")
    if item["status"] == "changed":
        print(f"  Summary: {item['changes']['summary']}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("qcxKU2ReRjP5NmlZR").call({
    urls: [
        "https://competitor.example.com/pricing",
        "https://partner.example.com/terms"
    ],
    mode: "text",
    notifyOnlyChanges: true,
    kvStoreName: "my-website-snapshots"
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.log(`${item.url} -- ${item.status}`);
    if (item.status === "changed") {
        console.log(`  Summary: ${item.changes.summary}`);
    }
});
```
cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/qcxKU2ReRjP5NmlZR/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://competitor.example.com/pricing"],
    "mode": "text",
    "notifyOnlyChanges": true,
    "kvStoreName": "my-website-snapshots"
  }'
```
Integrations
Website Change Monitor works with the Apify platform's built-in integrations:
- Webhooks -- trigger an HTTP POST to your endpoint whenever the actor finishes, sending change data to your backend.
- Zapier -- connect the output to 5,000+ apps for email, Slack, SMS, or spreadsheet alerts.
- Make (Integromat) -- build multi-step automation workflows that filter by status and route alerts.
- Google Sheets -- export change reports automatically for historical tracking and analysis.
- Slack / Microsoft Teams -- receive instant notifications in your team channel when pages change.
- Email notifications -- use Apify's built-in email integration for inbox alerts.
- Apify API -- programmatically trigger runs, retrieve datasets, and manage schedules.
How it works
Website Change Monitor follows a sequential pipeline for each URL in the input list:
- Fetch -- the actor sends an HTTP GET request to each URL with a 30-second timeout using AbortController and a custom ApifyBot/1.0 User-Agent header.
- Extract -- based on the selected mode, content is extracted: text mode uses Cheerio to strip <script>, <style>, <noscript>, <svg>, and <head> elements and then normalizes whitespace; html mode uses the raw response; selector mode uses Cheerio to extract text from elements matching the CSS selector.
- Hash -- the extracted content is passed through SHA-256 (crypto.createHash) to produce a 64-character hex digest.
- Compare -- the hash is compared against the previously stored hash in the Key-Value store. If no previous snapshot exists, the URL is marked as new.
- Diff -- when hashes differ, a line-by-line Set-based comparison identifies lines present in the old snapshot but missing from the new (removed) and lines present in the new but missing from the old (added).
- Store -- the current content, hash, and timestamp are saved to the Key-Value store under the key snapshot_{sha256(url)} for comparison on the next run.
- Output -- results are filtered based on the notifyOnlyChanges setting and pushed to the dataset.
```
URL List
    |
    v
[Fetch HTML] --error--> { status: "error" }
    |
    v
[Extract Content]
    |  text:     strip tags, normalize whitespace
    |  html:     raw markup
    |  selector: CSS target extraction
    |
    v
[SHA-256 Hash]
    |
    v
[Load Previous Snapshot from KV Store]
    |
    +--> No previous? --> { status: "new" } --> [Store Snapshot]
    |
    +--> Hash matches? --> { status: "unchanged" }
    |
    +--> Hash differs? --> [Line-by-Line Diff] --> { status: "changed" } --> [Store Snapshot]
    |
    v
[Filter by notifyOnlyChanges]
    |
    v
[Push to Dataset]
```
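The hash-and-compare core of this pipeline is simple enough to sketch. The following Python version (the actor itself is Node.js and uses crypto.createHash; the function names here are illustrative) shows how a SHA-256 digest classifies a URL as new, unchanged, or changed:

```python
import hashlib

def content_hash(text: str) -> str:
    """SHA-256 hex digest of the extracted content (64 characters)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def classify(previous_hash, current_text):
    """Return ('new'|'unchanged'|'changed', current_hash) for one URL."""
    h = content_hash(current_text)
    if previous_hash is None:
        return "new", h          # no stored snapshot yet: baseline run
    if h == previous_hash:
        return "unchanged", h    # identical digest, skip the diff entirely
    return "changed", h          # digests differ, run the line-by-line diff

status, h = classify(None, "Pricing: $9/mo")
print(status, len(h))  # new 64
```

Because equal content always produces an equal digest, the expensive line-by-line diff only runs when the two 64-character hashes differ.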
Performance & cost
Website Change Monitor runs on the Apify platform. You receive $5 of free platform credits monthly, which covers substantial monitoring workloads.
| Scenario | URLs | Frequency | Est. Monthly Cost |
|---|---|---|---|
| Light monitoring | 5 URLs | Daily | ~$0.50 |
| Standard monitoring | 20 URLs | Daily | ~$1.50 |
| Frequent checks | 20 URLs | Every 6 hours | ~$5.00 |
| Heavy monitoring | 100 URLs | Daily | ~$7.00 |
| Enterprise | 100 URLs | Hourly | ~$40.00 |
The actor uses minimal memory (256 MB) and completes quickly since it only fetches and compares text content. Actual costs depend on page size and number of URLs. One actor compute unit (1 CU) costs approximately $0.25 at 256 MB memory.
Limitations
- No JavaScript rendering -- the actor fetches raw HTML via HTTP GET. Pages that rely on client-side JavaScript to render content will not be monitored accurately. Use a browser-based scraper for single-page applications.
- No authentication support -- the actor cannot log in or pass session tokens. Only publicly accessible pages can be monitored.
- Sequential processing -- URLs are processed one at a time with a 30-second timeout each. Very large batches (500+ URLs) may require splitting across multiple runs.
- Line-level granularity only -- the diff algorithm operates on whole lines using Set comparison. It does not detect character-level or word-level changes within a line.
- Dynamic content noise -- pages with timestamps, session IDs, ad rotations, or A/B test variants may trigger false positives. Use CSS selector mode to target stable page sections.
- Snapshot truncation in output -- dataset output truncates snapshots to 1,000 characters for readability. Full content is stored in the KV store but is not directly visible in dataset items.
- Single User-Agent -- all requests use the ApifyBot/1.0 User-Agent string. Some websites may block or rate-limit this agent.
Responsible use
- Respect robots.txt -- check that the websites you monitor allow automated access. Some sites explicitly block bots in their robots.txt or terms of service.
- Set reasonable frequencies -- avoid scheduling checks every few minutes unless necessary. Hourly or daily checks are sufficient for most monitoring use cases and reduce load on target servers.
- Monitor only public content -- do not attempt to use this actor to access paywalled, authenticated, or restricted content. It is designed for publicly available web pages.
- Comply with local laws -- ensure your monitoring activities comply with applicable data protection regulations, computer access laws, and website terms of service in your jurisdiction.
- Avoid excessive load -- when monitoring large URL lists, consider staggering runs or splitting lists to prevent sending too many requests to a single domain in a short period.
FAQ
How does the actor detect changes? It fetches each URL, extracts content based on your chosen mode (text, HTML, or CSS selector), generates a SHA-256 hash of the extracted content, and compares it against the hash stored from the previous run. If the hashes differ, it performs a line-by-line diff to identify exactly what was added and removed.
What happens on the first run?
On the first run for any URL, the actor captures a baseline snapshot and stores it in the Key-Value store. The URL is reported with status new and the summary reads "First snapshot captured. Changes will be reported on next run."
Do I need a named KV Store? For one-off runs, the default store works fine. For scheduled or recurring runs, you should always set a named KV Store so snapshots persist between runs. Without it, every run treats every URL as new.
Can I monitor pages behind a login? No. The actor makes unauthenticated HTTP GET requests only. It works with publicly accessible web pages.
What if a page returns an error?
The actor reports it with status error and includes the error message (e.g., "HTTP 503 Service Unavailable" or "Fetch failed: The operation was aborted"). Other URLs in the same batch continue processing normally.
How do I reduce false positives from dynamic content?
Use CSS selector mode to target only the stable section of the page you care about, such as .pricing-table, #terms-content, or article.main. This avoids false alerts from ads, timestamps, session tokens, or other elements that change on every page load.
Can I monitor hundreds of URLs in a single run? Yes. The actor processes URLs sequentially with a 30-second timeout per URL. For very large batches (500+ URLs), consider splitting them across multiple scheduled runs to stay within execution time limits.
What is the difference between text, HTML, and selector modes? Text mode uses Cheerio to strip all non-visible elements (scripts, styles, SVG, head) and normalizes whitespace for the cleanest comparison. HTML mode compares the full raw markup, catching structural changes but producing noisier diffs. Selector mode extracts text only from elements matching your CSS selector, giving you surgical precision over what gets compared.
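To make text mode concrete: the actor does this with Cheerio in Node.js, but the same stripping can be sketched with Python's standard-library html.parser (the class name and tag set here are illustrative):

```python
from html.parser import HTMLParser

SKIPPED = {"script", "style", "noscript", "svg", "head"}  # tags text mode strips

class VisibleTextExtractor(HTMLParser):
    """Collect whitespace-normalized text outside of skipped elements."""

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting depth inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIPPED:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIPPED and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(" ".join(data.split()))  # normalize whitespace

parser = VisibleTextExtractor()
parser.feed(
    "<html><head><title>x</title></head>"
    "<body><script>var a = 1;</script><p>Pro  Plan   $29/mo</p></body></html>"
)
print(" ".join(parser.chunks))  # Pro Plan $29/mo
```

The script body and the head (including the title) are dropped, and runs of whitespace collapse to single spaces, which is why text mode produces far fewer false positives than raw HTML comparison.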
How are snapshots stored?
Each URL's content, hash, and timestamp are stored in the Key-Value store under the key snapshot_{sha256(url)}. The URL is hashed to produce a safe key without special characters.
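The key derivation is straightforward to reproduce. A Python sketch (the actor computes this in Node.js; the function name is illustrative):

```python
import hashlib

def snapshot_key(url: str) -> str:
    # Hashing the URL yields a fixed-length key containing only hex characters,
    # so arbitrary URLs map to safe Key-Value store keys.
    return "snapshot_" + hashlib.sha256(url.encode("utf-8")).hexdigest()

key = snapshot_key("https://competitor.example.com/pricing")
print(len(key))  # 73 ("snapshot_" prefix of 9 chars + 64-char hex digest)
```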
Does the actor support webhook notifications?
Yes, in two ways. First, you can set the webhookUrl input parameter to receive a JSON POST directly from the actor the moment a change is detected — no need to wait for the run to finish. Second, you can configure Apify platform webhooks to trigger when the entire run completes. For Slack, use the slackWebhookUrl input to get formatted change alerts with diff previews sent directly to your channel.
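On the receiving side, your webhookUrl endpoint just needs to parse the JSON body. The field names below (url, changeType, diffSummary, timestamp) follow the webhookUrl description in the input-parameter table; treat the exact payload shape as an assumption and verify it against a real delivery. A minimal Python sketch of the formatting logic such an endpoint might apply:

```python
import json

def handle_change(body: bytes) -> str:
    """Turn one webhook delivery into a human-readable alert line."""
    event = json.loads(body)
    return f"{event['url']} -- {event.get('diffSummary', 'no summary')}"

# Hypothetical payload mirroring the documented webhook fields.
sample = json.dumps({
    "url": "https://competitor.example.com/pricing",
    "changeType": "changed",
    "diffSummary": "3 line(s) added, 2 line(s) removed",
    "timestamp": "2026-02-17T10:00:00.000Z",
}).encode()

print(handle_change(sample))
```

In production this function would sit inside whatever HTTP framework your endpoint uses; only the JSON parsing and field access are specific to this actor.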
What does the notifyOnlyChanges flag do?
When set to true (the default), the actor only pushes items with status changed, new, or error to the dataset. Unchanged URLs are silently skipped. Set it to false to include all URLs in the output regardless of status.
Can I use this to verify my own website deployments? Yes. Add your own pages to the URL list and run the actor after a deployment. If the content hash changes as expected, your deployment went live. If it does not change, something may have gone wrong.
Help us improve
If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Related actors
| Actor | Description | Link |
|---|---|---|
| Website Contact Scraper | Extract emails, phone numbers, and social links from any website. | Open |
| Website Content to Markdown | Convert web pages to clean Markdown format for documentation and archiving. | Open |
| Website Tech Stack Detector | Identify the technologies, frameworks, and tools powering any website. | Open |
| WHOIS Domain Lookup | Look up domain registration details including registrar, expiry dates, and nameservers. | Open |
| Brand Protection Monitor | Monitor the web for unauthorized use of your brand name and trademarks. | Open |
| SERP Rank Tracker | Track your keyword rankings across search engines over time. | Open |