Website Contact Scraper Pro
Extract emails, phone numbers, team member names, and social media links from **JavaScript-heavy websites** — React, Angular, Vue, and other single-page applications (SPAs) that render content with JavaScript.
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| website-scanned | Charged per website scanned with Playwright browser rendering. Includes JavaScript execution, SPA rendering, multi-page crawling, and full contact extraction (emails, phones, names, social links). | $0.15 |
Example: 100 events = $15.00 · 1,000 events = $150.00
Documentation
This is the Playwright-powered version of Website Contact Scraper. It launches a real Chromium browser to render pages before extracting contacts, so it works on sites where the regular version returns empty results.
When to use Pro vs. Regular:
- Regular version — Fast and cheap. Works on 90%+ of business websites that serve contact info as static HTML. Use this first.
- Pro version (this one) — For the remaining sites built with React, Angular, Vue, Next.js, Nuxt, or other JavaScript frameworks that render content client-side. Uses more memory and takes longer, but gets contacts that HTTP-only scrapers miss.
How to scrape contacts from JavaScript-rendered websites
- Go to the Website Contact Scraper Pro Actor page on Apify
- Paste your website URLs into the URLs field (one per line, or as a JSON array)
- Click Start and wait for the browser-based scraping to complete
- Download your results as JSON, CSV, Excel, or export directly to Google Sheets
Each website produces one clean record with all discovered emails, phones, names, and social links — identical output format to the regular version.
What data can you extract?
For each website you provide, this Actor:
- Launches a Chromium browser and visits the homepage
- Waits for JavaScript to finish rendering (handles SPAs, lazy-loaded content, client-side routing)
- Discovers contact, about, and team pages and follows those links (up to 5 pages per domain by default)
- Extracts all contact information from the rendered DOM:
  - Email addresses from `mailto:` links and page content
  - Phone numbers from `tel:` links and formatted numbers in contact sections
  - People's names and job titles from team/about pages
  - Social media links (LinkedIn, Twitter/X, Facebook, Instagram, YouTube)
- Deduplicates and returns one clean record per website
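The dedupe-and-merge step above can be sketched in a few lines. This is an illustrative sketch only, not the Actor's actual internals: `build_record` and the regex are assumptions, and emails are normalized to lowercase for case-insensitive deduplication.

```python
import re

# Hypothetical email pattern; the real Actor's extraction rules may differ.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def build_record(url: str, domain: str, page_texts: list[str]) -> dict:
    """Merge emails found across crawled pages into one record per website."""
    emails: list[str] = []
    seen: set[str] = set()
    for text in page_texts:
        for email in EMAIL_RE.findall(text):
            key = email.lower()
            if key not in seen:  # case-insensitive dedup across pages
                seen.add(key)
                emails.append(key)
    return {
        "url": url,
        "domain": domain,
        "emails": emails,
        "pagesScraped": len(page_texts),
    }

record = build_record(
    "https://linear.app", "linear.app",
    ["Contact us at Hello@linear.app", "Support: hello@linear.app"],
)
print(record["emails"])  # one deduplicated, lowercased entry
```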
Use cases
Scraping React/Angular/Vue company sites
Many modern company websites are built as SPAs. Their "Contact Us" or "Team" page content is rendered entirely by JavaScript — a simple HTTP request returns an empty `<div id="root">`. This Actor renders the full page in a browser, then extracts contacts from the rendered DOM.
Lead generation from JavaScript-heavy directories
Some business directories and listing sites use JavaScript frameworks. When the regular scraper returns empty results, switch to Pro to get the data.
CRM enrichment for tech companies
Tech companies are more likely to use JavaScript frameworks for their websites. If your prospect list skews toward SaaS, startups, or tech firms, Pro will have a higher success rate than HTTP-only scraping.
Fallback for the regular scraper
Run the regular Website Contact Scraper first. For any sites that return `pagesScraped: 1` with no contacts, re-run those URLs through this Pro version.
Input
| Field | Type | Description | Default |
|---|---|---|---|
| `urls` | Array of strings | Website URLs to scrape (required) | -- |
| `maxPagesPerDomain` | Integer (1-20) | Max pages to crawl per website | 5 |
| `includeNames` | Boolean | Extract people's names and job titles | true |
| `includeSocials` | Boolean | Extract social media profile links | true |
| `proxyConfiguration` | Object | Proxy settings (recommended for 50+ sites) | Apify Proxy |
Example input
```json
{
  "urls": [
    "https://vercel.com",
    "https://linear.app",
    "https://notion.so"
  ],
  "maxPagesPerDomain": 5,
  "includeNames": true,
  "includeSocials": true
}
```
Output
Each website produces one record in the dataset. The output format is identical to the regular Website Contact Scraper.
Example output
```json
{
  "url": "https://linear.app",
  "domain": "linear.app",
  "emails": [
    "[email protected]"
  ],
  "phones": [],
  "contacts": [
    {
      "name": "Karri Saarinen",
      "title": "Co-founder & CEO"
    },
    {
      "name": "Tuomas Artman",
      "title": "Co-founder & CTO"
    }
  ],
  "socialLinks": {
    "linkedin": "https://www.linkedin.com/company/linear-app",
    "twitter": "https://x.com/linear"
  },
  "pagesScraped": 3,
  "scrapedAt": "2026-03-18T12:00:00.000Z"
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| `url` | String | The original website URL |
| `domain` | String | Domain name (without www) |
| `emails` | Array | Discovered email addresses (deduplicated) |
| `phones` | Array | Discovered phone numbers |
| `contacts` | Array | Named contacts with name, title, and optionally email |
| `socialLinks` | Object | Social media profile URLs (linkedin, twitter, facebook, instagram, youtube) |
| `pagesScraped` | Integer | Number of pages crawled on this domain |
| `scrapedAt` | String | ISO timestamp of when the scrape completed |
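Downstream tools often want these records flattened to one CSV row per website. A minimal sketch, using the field names from the table above (the `records_to_csv` helper and the joining format are assumptions, not part of the Actor):

```python
import csv
import io

def records_to_csv(records: list[dict]) -> str:
    """Flatten dataset records into CSV text, one row per website."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["domain", "emails", "phones", "contacts", "pagesScraped"])
    for r in records:
        writer.writerow([
            r["domain"],
            "; ".join(r.get("emails", [])),
            "; ".join(r.get("phones", [])),
            # Collapse each contact object to "Name (Title)"
            "; ".join(f"{c['name']} ({c['title']})" for c in r.get("contacts", [])),
            r.get("pagesScraped", 0),
        ])
    return buf.getvalue()

sample = {
    "domain": "linear.app",
    "emails": ["hello@linear.app"],
    "phones": [],
    "contacts": [{"name": "Karri Saarinen", "title": "Co-founder & CEO"}],
    "pagesScraped": 3,
}
print(records_to_csv([sample]))
```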
How to use the API
You can run this Actor programmatically and integrate it into your own applications.
Python
```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("ryanclinton/website-contact-scraper-pro").call(
    run_input={
        "urls": [
            "https://vercel.com",
            "https://linear.app",
            "https://notion.so",
        ],
        "maxPagesPerDomain": 5,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['domain']}: {item['emails']}")
```
JavaScript / Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('ryanclinton/website-contact-scraper-pro').call({
    urls: [
        'https://vercel.com',
        'https://linear.app',
        'https://notion.so',
    ],
    maxPagesPerDomain: 5,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    console.log(`${item.domain}: ${item.emails}`);
});
```
cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~website-contact-scraper-pro/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://vercel.com", "https://linear.app"],
    "maxPagesPerDomain": 5
  }'
```
How it finds contacts
This Actor uses the same proven extraction logic as the regular Website Contact Scraper, but adds a browser rendering step before extraction.
Browser rendering -- Each page is loaded in a Chromium browser via Playwright. The Actor waits for the `networkidle` state, which means all JavaScript has executed and dynamic content is rendered. This handles React hydration, Angular bootstrapping, Vue mounting, lazy-loaded components, and client-side routing.
Page discovery -- Reads the homepage and specifically follows links to pages that matter: contact, about, team, people, staff, leadership, and management pages. This keeps scraping fast and focused.
Email extraction -- Checks `mailto:` links first (most reliable), then scans rendered page content for email patterns. Filters out common junk like `noreply@`, `example@`, and image file false positives.
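The mailto-first strategy with junk filtering can be sketched like this. A minimal sketch under stated assumptions: the regexes, junk prefixes, and `extract_emails` helper are illustrative, not the Actor's actual rules.

```python
import re

JUNK_PREFIXES = ("noreply@", "no-reply@", "example@")
IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")
# mailto: hrefs are checked first (most reliable); '?' strips query params
MAILTO_RE = re.compile(r'href=["\']mailto:([^"\'?]+)', re.I)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(html: str) -> list[str]:
    """Collect emails from mailto: links, then page text, filtering junk."""
    found = MAILTO_RE.findall(html) + EMAIL_RE.findall(html)
    emails: list[str] = []
    for e in found:
        e = e.strip().lower()
        if e.startswith(JUNK_PREFIXES):   # noreply@, example@, etc.
            continue
        if e.endswith(IMAGE_EXTS):        # e.g. logo@2x.png false positives
            continue
        if e not in emails:               # preserve order, dedupe
            emails.append(e)
    return emails
```

Usage: `extract_emails('<a href="mailto:Hello@Acme.com">Email</a> sales@acme.com')` returns both addresses, lowercased and deduplicated.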
Phone extraction -- Prioritizes `tel:` links (reliable and intentionally published). For text-based numbers, only looks in contact sections and footers, and only matches numbers with clear formatting (not random digit sequences).
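A sketch of the tel-link-first idea, assuming a simple "formatted numbers only" pattern (the regexes and `extract_phones` helper are hypothetical; the Actor's real matching is likely more elaborate):

```python
import re

# tel: links are intentionally published, so trust them first
TEL_RE = re.compile(r'href=["\']tel:([+\d][\d\s().-]{6,})["\']', re.I)
# Require visible separators so bare digit runs (IDs, order numbers) don't match
FORMATTED_RE = re.compile(r"\(?\d{3}\)?[\s.-]\d{3}[\s.-]?\d{4}")

def extract_phones(html: str, footer_text: str) -> list[str]:
    """tel: links from the whole page, formatted numbers only from footer/contact text."""
    phones = [m.strip() for m in TEL_RE.findall(html)]
    for m in FORMATTED_RE.findall(footer_text):
        if m not in phones:
            phones.append(m)
    return phones
```

Note the design choice: free-text matching is restricted to contact sections and footers, so a nine-digit invoice number in body copy never becomes a "phone number".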
Name extraction -- Uses three strategies:
- Schema.org `Person` markup (structured data, most reliable)
- Common team card CSS patterns (`.team-member`, `.staff-member`, etc.)
- Heading + paragraph pairs where the paragraph contains job title keywords
Social links -- Extracts from `<a>` tags linking to LinkedIn, Twitter/X, Facebook, Instagram, and YouTube.
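Classifying social links amounts to mapping link hostnames to networks. A minimal sketch, assuming a hostname lookup table and a hypothetical `extract_socials` helper (requires Python 3.9+ for `str.removeprefix`):

```python
import re
from urllib.parse import urlparse

# Hostname -> output key; x.com and twitter.com both map to "twitter"
SOCIAL_HOSTS = {
    "linkedin.com": "linkedin",
    "twitter.com": "twitter",
    "x.com": "twitter",
    "facebook.com": "facebook",
    "instagram.com": "instagram",
    "youtube.com": "youtube",
}
HREF_RE = re.compile(r'<a[^>]+href=["\']([^"\']+)["\']', re.I)

def extract_socials(html: str) -> dict:
    """Build a socialLinks-style object from <a> hrefs in rendered HTML."""
    socials: dict[str, str] = {}
    for href in HREF_RE.findall(html):
        host = urlparse(href).netloc.lower().removeprefix("www.")
        key = SOCIAL_HOSTS.get(host)
        if key and key not in socials:  # keep the first link per network
            socials[key] = href
    return socials
```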
Regular vs. Pro comparison
| Feature | Regular | Pro (this one) |
|---|---|---|
| Engine | CheerioCrawler (HTTP only) | PlaywrightCrawler (Chromium browser) |
| JavaScript rendering | No | Yes |
| SPA support | No | Yes (React, Angular, Vue, etc.) |
| Speed | ~100 sites/minute | ~10-20 sites/minute |
| Memory usage | ~256 MB | ~4 GB |
| Concurrency | 10 concurrent pages | 5 concurrent pages |
| Request timeout | 30 seconds | 60 seconds |
| Cost per website | $0.15 | $0.15 |
| Output format | Identical | Identical |
Recommendation: Start with the regular version. Only use Pro for sites that return empty results with the regular scraper.
Combine with other Apify Actors
Website Contact Scraper Pro to Email Pattern Finder pipeline
Found team member names but no emails? Feed them into Email Pattern Finder to detect the company's email format and generate email addresses for every person.
Fallback pipeline: Regular then Pro
Run the regular scraper first for speed and cost. Re-run failures through Pro:
```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

# Step 1: Fast scrape with regular version
fast_run = client.actor("ryanclinton/website-contact-scraper").call(
    run_input={"urls": your_urls, "maxPagesPerDomain": 5}
)

# Step 2: Find sites that returned no contacts
empty_sites = []
for item in client.dataset(fast_run["defaultDatasetId"]).iterate_items():
    if not item["emails"] and not item["contacts"]:
        empty_sites.append(item["url"])

print(f"{len(empty_sites)} sites need browser rendering")

# Step 3: Re-run empty sites through Pro version
if empty_sites:
    pro_run = client.actor("ryanclinton/website-contact-scraper-pro").call(
        run_input={"urls": empty_sites, "maxPagesPerDomain": 5}
    )
    for item in client.dataset(pro_run["defaultDatasetId"]).iterate_items():
        print(f"{item['domain']}: {item['emails']} | {len(item['contacts'])} contacts")
```
Score and rank leads
After extracting contacts, score them with B2B Lead Qualifier. It analyzes each company's website for 30+ business quality signals and tells you which leads are worth contacting first.
Performance and cost
This Actor uses PlaywrightCrawler (real Chromium browser) which uses more compute than the regular HTTP-only version:
| Websites | Estimated time | Estimated platform cost |
|---|---|---|
| 10 | ~1 minute | ~$0.10 |
| 100 | ~10 minutes | ~$1.00 |
| 500 | ~45 minutes | ~$5.00 |
| 1,000 | ~1.5 hours | ~$10.00 |
Estimates based on 5 pages per domain with 4 GB memory. Actual costs vary by site complexity, JavaScript payload size, and rendering time.
Why does it cost more compute than the regular version? The regular version makes simple HTTP requests (like curl). This Pro version launches a full Chromium browser for each page, executes all JavaScript, waits for rendering to complete, then extracts from the rendered DOM. That takes more CPU, more memory, and more time per page.
Tips for best results
- Try the regular version first -- Most business websites serve contact info as static HTML. Only use Pro for sites that return empty results with Website Contact Scraper.
- Allocate enough memory -- 4 GB is the default and works for most cases. For very heavy SPAs, try 8 GB.
- Use proxies -- Enable Apify Proxy when scraping 50+ sites to avoid rate limiting. Browser-based scraping is more likely to trigger bot detection.
- Lower concurrency for heavy sites -- The default of 5 concurrent browsers balances speed and stability. If you see out-of-memory errors, reduce `maxPagesPerDomain`.
- Check `pagesScraped` -- If a site returns `pagesScraped: 1` with no data even in Pro mode, it likely uses a contact form (no visible email) or requires login.
Integrations
Export your results directly to:
- Google Sheets -- One-click export from the dataset view
- CSV / JSON / Excel -- Download in any format from the Apify Console
- Zapier / Make / n8n -- Automate workflows triggered when scraping completes
- API -- Access results programmatically via the Apify API (see code examples above)
- Webhooks -- Get notified when a run finishes and process results automatically
FAQ
When should I use Pro instead of the regular version?
Use Pro when the regular Website Contact Scraper returns empty results for a site you know has contact information. This typically happens with React, Angular, Vue, Next.js, and other JavaScript framework sites where the content is rendered client-side.
Why is this slower than the regular version?
The regular version makes fast HTTP requests and parses static HTML. This Pro version launches a Chromium browser, loads all JavaScript, CSS, fonts, images, and third-party scripts, then waits for the page to fully render before extracting data. That rendering step takes 3-10 seconds per page vs. milliseconds for HTTP requests.
Does this work on every website?
It works on any site that renders in a standard Chromium browser. It will not work on sites that require login, solve CAPTCHAs, or sites that serve different content to automated browsers vs. real users.
Is the output format the same as the regular version?
Yes, identical. You can use both versions interchangeably in your pipeline. The same code that processes regular scraper output works with Pro output.
Can I scrape thousands of sites with this?
Yes, but consider using the regular version for bulk scraping and only switching to Pro for sites that fail. A hybrid approach (regular first, Pro for failures) gives the best balance of speed, cost, and coverage. See the fallback pipeline example above.
Responsible use
This Actor extracts publicly available contact information from websites. By using it, you agree to:
- Comply with all applicable laws, including GDPR, CAN-SPAM, and CCPA
- Respect each website's Terms of Service and `robots.txt`
- Use extracted data only for legitimate business purposes (lead generation, market research, CRM enrichment)
- Not use this tool for unsolicited bulk email or spam
The Actor only accesses publicly available pages -- it cannot bypass logins, CAPTCHAs, or any access controls.
Limitations
- Higher resource usage -- Browser rendering uses significantly more CPU and memory than HTTP-only scraping. Allocate at least 4 GB.
- Slower -- Each page takes 3-10 seconds to render vs. milliseconds for the regular version.
- Login-protected pages -- Cannot access pages behind authentication.
- Contact forms only -- Sites that only have contact forms (no visible email/phone) won't yield email results.
- Bot detection -- Some sites detect automated browsers. Using residential proxies can help.
Pricing
- $0.15 per website scraped
- Only pay for successful results (no charge for sites that return empty)
| Websites | Price |
|---|---|
| 100 | $15 |
| 500 | $75 |
| 1,000 | $150 |
| 5,000 | $750 |
Note: Platform compute costs are separate and depend on memory allocation and runtime. See the performance table above for estimates.
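As a sanity check on the table above, the per-event math is a one-liner. The `estimate_cost` helper is hypothetical, used here only to verify the published figures:

```python
PRICE_PER_EVENT = 0.15  # USD per website scraped, per the pricing table

def estimate_cost(websites: int) -> float:
    """Event cost only; platform compute is billed separately."""
    return round(websites * PRICE_PER_EVENT, 2)

print(estimate_cost(100))    # 15.0
print(estimate_cost(1000))   # 150.0
```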
Changelog
v1.0.0 (2026-03-18)
- Initial release
- Playwright-powered browser rendering for JavaScript-heavy sites
- Same extraction logic as Website Contact Scraper (emails, phones, names, social links)
- Optimized for SPAs: React, Angular, Vue, Next.js, Nuxt, etc.
- Lower concurrency (5 vs 10) and longer timeouts (60s vs 30s) for stability