Job Market Intelligence
Aggregate job listings from four free data sources, deduplicate them, and generate a structured intelligence report with skill demand rankings, salary benchmarks, top hiring companies, and remote-work statistics — all without any API keys.
Maintenance Pulse: 90/100
Pricing
Pay Per Event model. You only pay for what you use.
| Event | Description | Price |
|---|---|---|
| report-generated | Charged per market intelligence report. Aggregates jobs from 4 sources with skill extraction, salary parsing, deduplication, and market analysis. | $0.50 |
Example: 100 events = $50.00 · 1,000 events = $500.00
Documentation
The actor queries Remotive, Arbeitnow, Jobicy, and Hacker News "Who's Hiring" threads in parallel, normalizes the results into a single schema, applies your filters (location, company, date, remote-only), and pushes both the raw listings and a summary report to the Apify dataset.
Why Use This Actor?
Understanding the job market requires data from multiple sources. Checking each job board manually is slow, inconsistent, and impossible to scale. This actor solves that by:
- Aggregating 4 job boards in one run — Remotive (remote tech jobs), Arbeitnow (European focus), Jobicy (remote-first), and HN Who's Hiring (startup jobs) are queried in parallel, giving you broader coverage than any single source.
- Extracting skills automatically — 70+ technologies across 6 categories (Languages, Frameworks, Cloud, Data, AI/ML, DevOps) are detected by regex scanning every job description, then ranked by frequency.
- Computing salary benchmarks — Parses salary ranges from structured API fields and free-text descriptions (USD and EUR), then calculates min, max, median, and average across all listings.
- Zero configuration — No API keys, tokens, or credentials needed. Every data source is free and public.
Whether you're a job seeker researching skills, a recruiter benchmarking compensation, or a data journalist analyzing hiring trends, this actor delivers structured intelligence from raw job board data.
Features
- Multi-source aggregation — Fetches from four independent job boards in a single run, giving you broader market coverage than any single source
- Automatic skill extraction — Scans every job description for 70+ technologies across Languages, Frameworks, Cloud, Data, AI/ML, and DevOps categories, then ranks them by frequency
- Salary intelligence — Parses salary ranges from structured fields and free-text descriptions (USD and EUR), calculates min, max, median, and average across all listings
- Smart deduplication — Removes duplicate postings that appear on multiple boards so the same role is never counted twice
- Company hiring velocity — Ranks companies by the number of open positions to reveal who is hiring most aggressively
- Flexible filtering — Narrow results by keyword, location, company name, remote-only flag, and posting recency (last 24 hours, week, month, or any time)
- Zero API keys required — Every data source used is free and public. No tokens, no credentials, no rate-limit headaches
- Structured JSON output — Every listing follows the same normalized schema regardless of source, making downstream analysis and integration straightforward
How to Use
- Open the actor in the Apify Console and click "Start"
- Enter a search query such as "data engineer", "product manager", or "machine learning". This is the only required field
- Optionally refine your search with location, company name, remote-only toggle, date recency, or specific sources
- Run the actor and wait for it to finish (typically under 60 seconds). The dataset will contain a summary report as the first item, followed by individual job listings
- Export or integrate — download results as JSON, CSV, or Excel, or connect the dataset to Zapier, Make, Google Sheets, or the Apify API for automated workflows
Input Parameters
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| query | String | Yes | "software engineer" | Job search keyword (e.g., "data scientist", "devops", "product manager") |
| location | String | No | — | Filter by location substring (e.g., "San Francisco", "Europe", "Remote") |
| companyName | String | No | — | Filter results to a specific company name |
| remoteOnly | Boolean | No | false | When enabled, only remote positions are returned |
| datePosted | Select | No | "month" | Posting recency: day (24h), week (7d), month (30d), or any |
| sources | String List | No | All sources | Which boards to query: remotive, arbeitnow, jobicy, hn-whoishiring |
| analyzeSkills | Boolean | No | true | Extract and rank mentioned technologies from job descriptions |
| analyzeSalaries | Boolean | No | true | Parse salary data and compute min/max/median/average statistics |
| maxResults | Integer | No | 100 | Maximum number of job listings to return (1–500) |
Input Examples
Broad market scan for data engineers:
```json
{
  "query": "data engineer",
  "datePosted": "month",
  "analyzeSkills": true,
  "analyzeSalaries": true,
  "maxResults": 200
}
```
Remote-only React developer roles in Europe:
```json
{
  "query": "react developer",
  "location": "Europe",
  "remoteOnly": true,
  "datePosted": "week",
  "sources": ["remotive", "arbeitnow", "jobicy"]
}
```
Monitor a specific company's hiring:
```json
{
  "query": "engineer",
  "companyName": "Stripe",
  "maxResults": 50
}
```
Quick pulse check from HN startups only:
```json
{
  "query": "machine learning",
  "sources": ["hn-whoishiring"],
  "datePosted": "month",
  "maxResults": 100
}
```
Tips for Input
- Start broad, then filter — Run a general query like "engineer" first to see the full landscape, then narrow with location or company filters in subsequent runs.
- Source selection — Remotive and Jobicy focus on remote roles, Arbeitnow covers European markets heavily, and HN Who's Hiring surfaces startup opportunities. Use `sources` to target specific ecosystems.
- Date filter — `day` = last 24 hours, `week` = last 7 days, `month` = last 30 days, `any` = no time restriction.
Output Example
The dataset contains two types of records. The first item is always a summary report:
```json
{
  "type": "summary",
  "query": "data engineer",
  "location": null,
  "analyzedAt": "2025-05-15T14:32:00.000Z",
  "totalListings": 87,
  "sourceBreakdown": {
    "remotive": 24,
    "arbeitnow": 31,
    "jobicy": 18,
    "hn-whoishiring": 14
  },
  "topSkills": [
    { "skill": "Python", "count": 62, "percentage": 71.3 },
    { "skill": "SQL", "count": 58, "percentage": 66.7 },
    { "skill": "AWS", "count": 41, "percentage": 47.1 },
    { "skill": "Spark", "count": 33, "percentage": 37.9 },
    { "skill": "Kafka", "count": 28, "percentage": 32.2 }
  ],
  "salaryInsights": {
    "dataPoints": 34,
    "minSalary": 85000,
    "maxSalary": 240000,
    "medianSalary": 155000,
    "averageSalary": 148500,
    "currency": "USD"
  },
  "topHiringCompanies": [
    { "company": "Databricks", "openings": 4 },
    { "company": "Snowflake", "openings": 3 },
    { "company": "Stripe", "openings": 2 }
  ],
  "jobTypeBreakdown": {
    "full-time": 71,
    "contract": 12,
    "unknown": 4
  },
  "remotePercentage": 82.8
}
```
Each subsequent item is a normalized job listing:
```json
{
  "type": "job",
  "source": "remotive",
  "title": "Senior Data Engineer",
  "company": "Snowflake",
  "location": "Worldwide",
  "remote": true,
  "jobType": "full-time",
  "salaryMin": 160000,
  "salaryMax": 210000,
  "salaryCurrency": "USD",
  "description": "We are looking for a Senior Data Engineer to build and maintain our core data platform...",
  "skills": ["Python", "SQL", "Spark", "Kafka", "Airflow", "AWS", "Docker", "Kubernetes"],
  "tags": ["data", "engineering", "big-data"],
  "postedDate": "2025-05-12T08:00:00.000Z",
  "url": "https://remotive.com/remote-jobs/software-dev/senior-data-engineer-12345",
  "applyUrl": "https://remotive.com/remote-jobs/software-dev/senior-data-engineer-12345"
}
```
Output Fields — Summary Report
| Field | Type | Description |
|---|---|---|
| type | string | Always "summary" for the report record |
| query | string | The search query used |
| location | string \| null | Location filter applied (if any) |
| analyzedAt | string | ISO timestamp of when the analysis ran |
| totalListings | number | Total deduplicated job listings found |
| sourceBreakdown | object | Count of listings per source (e.g., {"remotive": 24, "arbeitnow": 31}) |
| topSkills | array | Top 30 skills ranked by frequency, each with skill, count, and percentage |
| salaryInsights | object \| null | Salary statistics: dataPoints, minSalary, maxSalary, medianSalary, averageSalary, currency |
| topHiringCompanies | array | Top 20 companies by number of open positions, each with company and openings |
| jobTypeBreakdown | object | Count per job type: full-time, part-time, contract, internship, temporary, unknown |
| remotePercentage | number | Percentage of listings flagged as remote |
Output Fields — Job Listing
| Field | Type | Description |
|---|---|---|
| type | string | Always "job" for individual listings |
| source | string | Which board the listing came from: remotive, arbeitnow, jobicy, or hn-whoishiring |
| title | string | Job title (extracted or parsed from source) |
| company | string | Company name (HN listings may show "Unknown (HN)" if parsing fails) |
| location | string \| null | Job location (may be "Remote", a city, or null) |
| remote | boolean | Whether the position is remote |
| jobType | string \| null | Normalized job type: full-time, part-time, contract, internship, temporary |
| salaryMin | number \| null | Minimum salary (annual, in stated currency) |
| salaryMax | number \| null | Maximum salary (annual, in stated currency) |
| salaryCurrency | string \| null | Currency code: USD or EUR |
| description | string | Job description text (HTML stripped, max 2,000 chars) |
| skills | string[] | Technologies detected in the description (e.g., ["Python", "AWS", "Docker"]) |
| tags | string[] | Tags from the source API (empty for HN listings) |
| postedDate | string \| null | ISO timestamp of when the job was posted |
| url | string | URL to the original listing |
| applyUrl | string \| null | Direct application URL (when available) |
Use Cases
- Job seekers — Search for roles matching your skills, compare salary ranges across companies, and discover which technologies are most in-demand for your target position
- Recruiters and talent acquisition teams — Monitor competitor hiring activity, understand which skills the market demands, and benchmark compensation packages before writing job descriptions
- HR and workforce planning analysts — Track hiring trends over time by scheduling periodic runs to build a longitudinal dataset of skill demand and salary movement
- Career coaches and bootcamp instructors — Identify the most requested programming languages, frameworks, and cloud platforms so you can align curriculum with real employer needs
- Startup founders — Research the talent landscape before hiring. See what competitors pay, which skills are scarce, and whether remote or on-site roles dominate your niche
- Data journalists and researchers — Gather structured, source-attributed job market data for articles, reports, or academic studies on labor economics and tech hiring
API & Programmatic Access
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/job-market-intelligence").call(run_input={
    "query": "data engineer",
    "remoteOnly": True,
    "analyzeSkills": True,
    "analyzeSalaries": True,
    "maxResults": 200,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["type"] == "summary":
        print(f"Total listings: {item['totalListings']}")
        print(f"Remote %: {item['remotePercentage']}%")
        if item.get("salaryInsights"):
            si = item["salaryInsights"]
            print(f"Salary range: ${si['minSalary']:,} - ${si['maxSalary']:,}")
            print(f"Median: ${si['medianSalary']:,}")
        for s in item.get("topSkills", [])[:10]:
            print(f"  {s['skill']}: {s['count']} ({s['percentage']}%)")
    else:
        print(f"{item['company']} - {item['title']} ({item['source']})")
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('ryanclinton/job-market-intelligence').call({
  query: 'data engineer',
  remoteOnly: true,
  analyzeSkills: true,
  analyzeSalaries: true,
  maxResults: 200,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const summary = items.find(i => i.type === 'summary');
const jobs = items.filter(i => i.type === 'job');

console.log(`Found ${summary.totalListings} listings, ${summary.remotePercentage}% remote`);
console.log('Top skills:', summary.topSkills.slice(0, 5).map(s => s.skill).join(', '));
jobs.forEach(j => console.log(`${j.company} - ${j.title} (${j.source})`));
```
cURL
```bash
# Start the actor
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~job-market-intelligence/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "data engineer",
    "remoteOnly": true,
    "analyzeSkills": true,
    "maxResults": 200
  }'

# Fetch results (use defaultDatasetId from the response above)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"
```
How It Works — Technical Details
```text
Input: query, location, remoteOnly, datePosted, sources, maxResults
        │
        ▼
PARALLEL FETCH (Promise.allSettled — failures don't crash the run)
  ├─ Remotive      GET /api/remote-jobs?search=X&limit=N
  │                Salary from field + description fallback; remote-only board
  ├─ Arbeitnow     GET /api/job-board-api?search=X&page=1..3
  │                Salary from description regex; created_at = Unix epoch; European focus
  ├─ Jobicy        GET /api/v2/remote-jobs?count=N&tag=X
  │                Salary from API fields; remote-only board
  └─ HN Algolia    GET /api/v1/search?query=X&tags=comment,ask_hn
                   Last 90 days; parses company from first line
        │
        ▼
NORMALIZE to NormalizedJob schema
  (title, company, location, remote, salary, skills...)
  • Skills: 70+ regex patterns across 6 categories
  • Salary: USD/EUR regex from fields + description text
  • Job type: normalize → full-time/part-time/contract/etc.
  • Description: strip HTML, max 2,000 chars
        │
        ▼
FILTER PIPELINE (sequential)
  1. Date filter (day=24h, week=7d, month=30d)
  2. Remote-only filter (j.remote === true)
  3. Location filter (case-insensitive substring)
     └─ Graceful fallback: if ALL removed, re-include
  4. Company name filter (case-insensitive substring)
  5. Deduplication (company + first 60 chars of title)
  6. Cap at maxResults
        │
        ▼
BUILD SUMMARY REPORT
  • Source breakdown (count per board)
  • Top 30 skills by frequency + percentage
  • Salary: min, max, median, average (sorted array)
  • Top 20 hiring companies by openings
  • Job type breakdown
  • Remote percentage
        │
        ▼
Push to Dataset: [summary, ...jobs]
```
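The location step is the one non-obvious part of the filter pipeline: if the filter would remove every listing, the actor re-includes them all. A minimal Python sketch of that behavior (the function name and job dicts are illustrative, not the actor's actual code):

```python
def filter_by_location(jobs, location):
    """Case-insensitive substring match with graceful fallback:
    if the filter would remove every listing, keep them all instead."""
    if not location:
        return jobs
    needle = location.lower()
    kept = [j for j in jobs if needle in (j.get("location") or "").lower()]
    return kept if kept else jobs  # fallback: never return an empty result set

jobs = [
    {"title": "Data Engineer", "location": "Berlin, Germany"},
    {"title": "ML Engineer", "location": "Remote"},
]
print(filter_by_location(jobs, "germany"))  # keeps only the Berlin listing
print(filter_by_location(jobs, "tokyo"))    # fallback: keeps both
```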
Data Source Details
| Source | API Endpoint | Coverage | Salary Data | Notes |
|---|---|---|---|---|
| Remotive | remotive.com/api/remote-jobs | Remote tech jobs worldwide | Structured field + description regex | Single page, ?search=X&limit=N |
| Arbeitnow | arbeitnow.com/api/job-board-api | European focus, all job types | Description regex only | Paginated up to 3 pages, created_at is Unix timestamp |
| Jobicy | jobicy.com/api/v2/remote-jobs | Remote-first jobs | Structured annualSalaryMin/Max fields | ?count=N&tag=X |
| HN Who's Hiring | hn.algolia.com/api/v1/search | Startup jobs from monthly threads | Description regex only | Searches comments from last 90 days, parses company from first line |
Skill Detection System
The actor scans each job description against 70+ technology patterns organized into 6 categories:
| Category | Skills Detected |
|---|---|
| Languages | Python, JavaScript, TypeScript, Java, Rust, C++, Ruby, PHP, Swift, Kotlin, Scala, SQL, R, Go |
| Frameworks | React, Angular, Vue, Next.js, Django, Flask, Spring, Rails, Laravel, FastAPI, Express, Node.js, Svelte, NestJS, .NET |
| Cloud | AWS, Azure, GCP, Docker, Kubernetes, Terraform, CI/CD, Jenkins, GitHub Actions, CloudFormation |
| Data | PostgreSQL, MongoDB, Redis, Elasticsearch, Kafka, Spark, Snowflake, BigQuery, Airflow, MySQL, DynamoDB, Cassandra, Redshift |
| AI/ML | Machine Learning, Deep Learning, NLP, Computer Vision, PyTorch, TensorFlow, LLM, GPT, RAG, Generative AI, Neural Network |
| Other | Git, Linux, Agile, REST, GraphQL, gRPC, Microservices, Scrum, DevOps, SRE |
Special handling: R and Go use context-aware regex to avoid false positives (e.g., "R" only matches when near "programming", "language", or other languages; "Go" matches "Golang" or "Go" in programming context).
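As an illustration of this kind of context-aware matching, here is a minimal Python sketch. The patterns below are assumptions for demonstration, not the actor's actual 70+ pattern set:

```python
import re

# Illustrative patterns only. "Go" matches "Golang" or the bare word "Go"
# when it is followed by programming context within 40 characters.
SKILL_PATTERNS = {
    "Python": re.compile(r"\bpython\b", re.I),
    "Kubernetes": re.compile(r"\b(kubernetes|k8s)\b", re.I),
    "Go": re.compile(
        r"\b(golang|go)\b(?=.{0,40}\b(developer|engineer|programming|language)\b)",
        re.I,
    ),
}

def extract_skills(description):
    """Return every skill whose pattern appears in the description."""
    return [skill for skill, pat in SKILL_PATTERNS.items() if pat.search(description)]

print(extract_skills("We need a Go developer with Python and k8s experience."))
# ['Python', 'Kubernetes', 'Go']
print(extract_skills("Let's go to the meeting room."))  # [] (no false positive)
```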
Salary Extraction
Salary parsing uses multiple regex patterns applied to both structured API fields and free-text descriptions:
| Pattern | Example | Currency |
|---|---|---|
| `$Xk - $Xk` | $120k - $180k | USD |
| `$X,XXX - $X,XXX` | $120,000 - $180,000 | USD |
| `$Xk/year` | $150k/year | USD |
| `$X,XXX/year` | $150,000/year | USD |
| `€X - €X` | €50,000 - €80,000 | EUR |
Values under 1,000 are automatically multiplied by 1,000 (treating "150" as "$150k"). The summary report computes statistics from the sorted union of all min and max salary values.
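A hedged Python sketch of that parsing logic follows. The regex and helper are illustrative approximations of the USD-range patterns in the table, not the actor's exact implementation:

```python
import re
import statistics

# Illustrative USD range pattern: "$120k - $180k" or "$120,000 - $180,000"
USD_RANGE = re.compile(
    r"\$(\d{1,3}(?:,\d{3})+|\d+)k?\s*[-–]\s*\$(\d{1,3}(?:,\d{3})+|\d+)k?", re.I
)

def parse_usd_range(text):
    m = USD_RANGE.search(text)
    if not m:
        return None
    lo, hi = (int(g.replace(",", "")) for g in m.groups())
    # Values under 1,000 are treated as thousands ("150" -> 150000)
    lo = lo * 1000 if lo < 1000 else lo
    hi = hi * 1000 if hi < 1000 else hi
    return lo, hi

print(parse_usd_range("Compensation: $120k - $180k plus equity"))  # (120000, 180000)
print(parse_usd_range("Salary: $120,000 - $180,000"))              # (120000, 180000)

# Statistics are computed over the sorted union of all min and max values
values = sorted([120000, 180000, 120000, 180000])
print(statistics.median(values), statistics.mean(values))
```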
Deduplication Algorithm
Duplicate detection uses a composite key: company.toLowerCase().trim() + "::" + title.toLowerCase().trim().slice(0, 60). The first listing encountered for each key is kept; subsequent duplicates are discarded. This catches the same job posted across multiple boards.
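In Python, that composite-key scheme could look like the sketch below (illustrative code that mirrors the key described above):

```python
def dedupe(jobs):
    """Keep only the first listing per (company, title-prefix) composite key."""
    seen, unique = set(), []
    for job in jobs:
        key = job["company"].lower().strip() + "::" + job["title"].lower().strip()[:60]
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

jobs = [
    {"company": "Snowflake", "title": "Senior Data Engineer", "source": "remotive"},
    {"company": "Snowflake", "title": "Senior Data Engineer", "source": "jobicy"},
    {"company": "Stripe", "title": "Senior Data Engineer", "source": "remotive"},
]
print(len(dedupe(jobs)))  # 2 (the Jobicy duplicate is dropped)
```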
HN Who's Hiring Comment Parsing
Hacker News comments are unstructured text. The actor extracts structured data via:
- Company: regex on the first line, `/^([A-Z][A-Za-z0-9\s&.'-]+?)[\s]*[|(\-–]/` (expects "Company | Role" format)
- Role: matches patterns like "hiring/looking for/seeking X" or "Company | X"
- Remote: word-boundary match for `/\bremote\b/i`
- Location: matches "location/based in/office in: X"
- Minimum length: comments under 50 characters are skipped
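The company-extraction step can be sketched in Python. The regex mirrors the first-line pattern described above, while the helper name is illustrative; "Unknown (HN)" is the fallback value documented in the job-listing fields:

```python
import re

# Mirrors the documented first-line pattern for "Company | Role ..." comments
COMPANY_RE = re.compile(r"^([A-Z][A-Za-z0-9\s&.'-]+?)\s*[|(\-–]")

def parse_hn_company(comment):
    """Extract the company name from the first line of an HN comment."""
    first_line = comment.strip().splitlines()[0]
    m = COMPANY_RE.match(first_line)
    return m.group(1).strip() if m else "Unknown (HN)"

print(parse_hn_company("Stripe | Senior Engineer | Remote (US) | Full-time"))  # Stripe
print(parse_hn_company("we're hiring!"))  # Unknown (HN)
```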
How Much Does It Cost?
The Job Market Intelligence actor uses minimal compute resources because it calls lightweight REST APIs rather than rendering web pages. No proxies are required.
| Scenario | Sources | Estimated Time | Estimated Cost |
|---|---|---|---|
| Quick scan (1 source, 50 results) | 1 | ~10 seconds | ~$0.005 |
| Standard run (all sources, 100 results) | 4 | ~30 seconds | ~$0.02 |
| Deep scan (all sources, 500 results) | 4 | ~60 seconds | ~$0.03 |
Each run typically completes in under 60 seconds using 256 MB of memory. A single run costs approximately $0.02 in platform credits.
| Plan | Monthly Credits | Approximate Runs |
|---|---|---|
| Free | $5 | ~250 runs |
| Starter ($49/mo) | $49 | ~2,450 runs |
| Scale ($499/mo) | $499 | ~24,950 runs |
Tips
- Start broad, then filter — Run a general query like "engineer" first to see the full landscape, then narrow with location or company filters in subsequent runs.
- Combine sources strategically — Remotive and Jobicy focus on remote roles, Arbeitnow covers European markets heavily, and HN Who's Hiring surfaces startup opportunities. Use the `sources` parameter to target specific ecosystems.
- Schedule weekly runs to build a time-series dataset of skill demand trends. Export to Google Sheets and chart how Python vs. Rust demand changes month over month.
- Use `maxResults: 500` for comprehensive market reports, or keep it at 50 for quick daily pulse checks.
- Filter by company name to monitor a specific competitor's hiring velocity — a sudden spike in open roles often signals a new product launch or funding round.
- Disable salary or skill analysis with the toggle fields if you only need raw listings. This slightly reduces processing time for very large result sets.
Limitations
- Source coverage — Only four job boards are queried. Major platforms like LinkedIn, Indeed, and Glassdoor are not included due to their authentication requirements and anti-bot measures.
- Salary data availability — Not all listings include salary information. The salary statistics are based only on listings that provide parseable salary data, which may skew toward certain markets or seniority levels.
- Currency support — Only USD (`$`) and EUR (`€`) salary patterns are recognized. Salaries in GBP, CAD, AUD, or other currencies will not be extracted into structured salary fields.
- Skill detection scope — The 70+ skill patterns are tuned for technology roles. Non-tech skills (e.g., "project management", "sales") are not tracked. False positives are possible for ambiguous terms.
- HN comment parsing — Hacker News "Who's Hiring" comments are free-form text. Company name, role, and location extraction is best-effort via regex and may produce incorrect results for non-standard formats.
- No direct application — The actor collects listing URLs but does not submit job applications on your behalf.
- Real-time freshness — Data comes from live API calls, but the underlying job boards may have their own delays in indexing new postings.
- Deduplication limits — The deduplication key uses company name + first 60 characters of the title. Listings with slightly different titles for the same role may not be caught.
Responsible Use
This actor accesses only publicly available job board APIs that are designed for programmatic access. It does not bypass authentication, scrape private data, or violate any terms of service. When using job market data:
- Use data for legitimate research, job seeking, or workforce planning purposes
- Do not use automated data to discriminate against job seekers or companies
- Respect the intellectual property of job descriptions and company information
- Comply with all applicable employment and data protection laws in your jurisdiction
- See Apify's guide on web scraping legality for general guidance
FAQ
Do I need any API keys to use this actor? No. All four data sources (Remotive, Arbeitnow, Jobicy, HN Algolia) are free public APIs. No authentication is required.
How many jobs can I get per run? The actor can return up to 500 listings per run. The actual count depends on how many matches exist for your query across all four sources.
Does this actor work for non-tech jobs? Yes. While the skill extraction is tuned for technology roles, the job search itself works for any keyword — "marketing manager", "nurse", "accountant", or any other role. The skill analysis will simply return fewer matches for non-tech positions.
How fresh is the data?
Listings come directly from live APIs at run time. Use the datePosted filter to restrict results to the last 24 hours, week, or month. Data is never cached between runs.
Can I filter for a specific country or city?
Yes. Enter the location in the location field (e.g., "Germany", "London", "USA"). The actor performs a case-insensitive substring match against each listing's location field. If the filter removes all results, the actor gracefully falls back to including all listings.
What does the hn-whoishiring source cover?
It searches Hacker News "Who is Hiring?" monthly threads via the Algolia search API (last 90 days). These contain direct hiring posts from startup founders and engineering managers — often with roles not listed on traditional job boards.
How does deduplication work? The actor generates a key from the lowercased company name and first 60 characters of the job title. If two listings share the same key, only the first one encountered is kept.
Can I run this on a schedule? Absolutely. Set up a schedule in the Apify Console (e.g., daily at 9 AM) to build a longitudinal dataset. Each run appends to the same named dataset if you configure it that way.
What currencies are supported for salary extraction? The parser recognizes USD ($) and EUR (€) salary patterns. Salaries in other currencies may appear in the description text but will not be extracted into the structured salary fields.
Why does the summary show `salaryInsights: null`?
This happens when no listings in your results contain parseable salary data. Try broadening your query or using sources that more frequently include salary information (Jobicy has structured salary fields).
Integrations
Connect the Job Market Intelligence actor to your existing tools and workflows:
- Zapier — Trigger actions in 5,000+ apps when new job listings are found
- Make — Build complex job monitoring automation workflows
- Google Sheets — Export job data directly to spreadsheets for analysis
- Slack — Get instant notifications when new jobs matching your criteria appear
- The Apify API — Programmatic access to results via REST API
- Apify Webhooks — Trigger custom actions when a run finishes
Related Actors
| Actor | Use Case |
|---|---|
| ryanclinton/website-contact-scraper | Extract emails, phone numbers, and social links from company websites found in job listings |
| ryanclinton/b2b-lead-gen-suite | Combine multiple data sources to build enriched B2B lead lists |
| ryanclinton/company-deep-research | Deep-dive into a specific company with financial, social, and web data |
| ryanclinton/github-repo-search | Find open-source projects from companies that appear in your job market results |
| ryanclinton/website-tech-stack-detector | Identify the technology stack a hiring company actually uses on their website |
| ryanclinton/serp-rank-tracker | Monitor search engine rankings for job-related keywords |