Compute Unit
An Apify Compute Unit (CU) is a measure of computing resources consumed by an actor run, calculated as memory allocated (in GB) multiplied by run duration (in hours). One compute unit equals 1 GB of memory running for 1 hour. A 512 MB actor running for 30 minutes uses 0.25 CU; a 4 GB PlaywrightCrawler running for 15 minutes uses 1.0 CU. At approximately $0.25 per compute unit on paid plans (with volume discounts available), compute costs are typically the smallest part of an Apify bill, but they add up quickly for high-volume operations or poorly optimized actors.

Compute units matter because they are the baseline infrastructure cost for every actor run on Apify. Whether you run a free actor or a paid PPE actor, compute costs always apply. Users on the free plan get 0.5 CU per month (enough for light testing), while paid plans start at $49/month with 100 CU included. Understanding compute units is essential for both actor developers (to optimize costs and provide accurate pricing guidance) and users (to budget their scraping and automation workflows).

To estimate compute costs before running an actor, use this formula:

CU = (memory_in_MB / 1024) * (duration_in_seconds / 3600)

For example, a CheerioCrawler actor using 512 MB that scrapes 1,000 pages in 2 minutes costs (512/1024) * (120/3600) = 0.0167 CU, or about $0.004. The same 1,000 pages with a PlaywrightCrawler using 4 GB and taking 10 minutes costs (4096/1024) * (600/3600) = 0.667 CU, or about $0.17. That is a 40x difference in compute cost for the same number of pages, which is why choosing the right crawler type matters enormously.

To monitor compute usage, check the Apify Console at console.apify.com under the Usage tab. You can see CU consumption broken down by actor, by day, and by individual run. The API endpoint GET /v2/users/me/usage returns the same data programmatically. Set up billing alerts in the Console to get notified when your monthly usage approaches your plan's included CU allowance.
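The estimation formula above can be wrapped in a small helper for back-of-envelope budgeting. This is a standalone sketch (not part of the Apify SDK), and the $0.25/CU rate is illustrative — actual rates depend on your plan:

```javascript
// Estimate compute units from memory allocation and run duration,
// using CU = (memory_in_MB / 1024) * (duration_in_seconds / 3600).
function estimateComputeUnits(memoryMb, durationSeconds) {
  return (memoryMb / 1024) * (durationSeconds / 3600);
}

// Convert to dollars; 0.25 USD/CU is an assumed example rate.
function estimateCostUsd(memoryMb, durationSeconds, usdPerCu = 0.25) {
  return estimateComputeUnits(memoryMb, durationSeconds) * usdPerCu;
}

// The CheerioCrawler example from the text: 512 MB for 2 minutes.
console.log(estimateComputeUnits(512, 120).toFixed(4));  // ~0.0167 CU
// The PlaywrightCrawler example: 4096 MB for 10 minutes.
console.log(estimateComputeUnits(4096, 600).toFixed(3)); // 0.667 CU
```

Running both worked examples through the same function makes the 40x gap between the two crawler configurations easy to verify before committing to a run.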
Common mistakes with compute units include setting actor memory too high 'just in case' — a 4 GB allocation for a simple HTTP-based actor consumes 8x the compute of a properly sized 512 MB allocation. Another mistake is leaving test runs running indefinitely: a forgotten 1 GB actor running for 24 hours burns 24 CU ($6), which can drain a free plan's entire monthly allowance in one accident. Always set a reasonable timeoutSecs in your actor configuration (e.g., 300 seconds for simple scrapers, 3600 seconds for large crawls). A third mistake is not using the memoryMbytes parameter when calling actors via the API — if you do not specify memory, the actor uses its default allocation, which may be higher than necessary for your specific input.

To reduce compute costs as a developer, optimize your actor's memory footprint: use CheerioCrawler instead of PlaywrightCrawler where possible, limit concurrency to avoid memory spikes, and stream large datasets with Actor.pushData() in batches rather than accumulating everything in memory. As a user, compare actors that perform the same task — a well-optimized actor can be 10-50x cheaper in compute costs than a poorly written one, even if both produce the same output.

Compute units are separate from PPE charges and proxy costs. A single actor run may incur all three: compute units for the infrastructure, PPE charges for the value delivered, and proxy data transfer fees for IP rotation. Check all three in the Usage tab to understand your total cost per run.

Related concepts: PPE, Actor Run, Cheerio Crawler, Playwright Crawler, Proxy.
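To make the memory and timeout advice concrete, here is a minimal sketch of building a run-actor request URL with explicit memory and timeout query parameters, so a run never falls back to the actor's (possibly oversized) defaults. The endpoint shape and parameter names are assumptions based on the Apify API v2 run-actor endpoint — verify them against the current API reference before relying on this:

```javascript
// Build a URL for Apify's run-actor endpoint with explicit limits.
// Parameter names ('memory' in megabytes, 'timeout' in seconds) are
// assumed from the Apify API v2 docs; check them for your API version.
function buildRunUrl(actorId, { memoryMbytes, timeoutSecs, token }) {
  const params = new URLSearchParams({
    memory: String(memoryMbytes),  // megabytes; Apify expects a power of 2
    timeout: String(timeoutSecs),  // seconds until the run is aborted
    token,                         // your API token
  });
  return `https://api.apify.com/v2/acts/${actorId}/runs?${params}`;
}

console.log(buildRunUrl('apify~hello-world', {
  memoryMbytes: 512,
  timeoutSecs: 300,
  token: 'MY_TOKEN', // placeholder
}));
```

Passing these limits on every API call caps the worst-case cost of a run at a known number of compute units, which is exactly the protection the forgotten-test-run scenario above calls for.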
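The batching advice can be sketched with a plain generator that splits results into fixed-size chunks. Inside a real actor you would typically push items as they are scraped rather than accumulating them first; this standalone sketch only shows the chunking step, and the Actor.pushData call mentioned in the comment is not exercised here:

```javascript
// Split a result set into fixed-size batches so each batch can be
// flushed to the dataset (e.g. via `await Actor.pushData(batch)` in an
// Apify actor) instead of holding everything in memory at once.
function* inBatches(items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

// Hypothetical scrape output: 2,500 result records.
const results = Array.from({ length: 2500 }, (_, i) => ({ url: `page-${i}` }));
const batches = [...inBatches(results, 1000)];
console.log(batches.length);    // 3 batches: 1000 + 1000 + 500
console.log(batches[2].length); // 500
```

Keeping each batch small bounds peak memory, which in turn lets the actor run with a smaller memoryMbytes allocation and therefore fewer compute units.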