Development & Debugging

How do I handle Apify actor timeouts?

Actor timeouts occur when your actor exceeds its configured time limit (set in actor.json, default is 3600 seconds / 1 hour). The fix depends on whether the timeout is caused by the actor doing too much work, getting stuck on slow operations, or entering an infinite loop. Here is how to diagnose and fix each cause.

Too much work per run: if your actor tries to process thousands of pages or API calls in a single run, it may legitimately need more time. Solutions: increase the timeout in actor.json (up to 86400 seconds / 24 hours on paid plans), implement pagination in your actor input so users can process data in smaller batches, or add request queuing with configurable concurrency (maxConcurrency in Crawlee) to process requests in parallel rather than sequentially.

Stuck on slow operations: if your actor hangs on specific network requests, implement request-level timeouts using AbortController or the timeout option in fetch/axios. A common pattern is a 30-second timeout per individual request, so that a single slow target does not stall the entire run. For Crawlee-based actors, set navigationTimeoutSecs and requestHandlerTimeoutSecs in the crawler configuration.

Infinite loops: these are bugs where your actor enters a cycle it never exits, such as continuously re-crawling the same URL or retrying a permanently failing request. Add logging at key decision points so you can see the actor's progress in the run log and identify where it gets stuck.

For Crawlee-based scrapers, the most effective timeout prevention measures are:

- Set maxRequestsPerCrawl to cap the total number of pages processed.
- Set maxRequestRetries to limit retries on failing URLs (the default of 3 is usually fine).
- Implement a progress callback that logs completion percentage, so you can monitor whether the actor is making forward progress.

From a fleet management perspective, ApifyForge tracks timeout rates across your actors in the dashboard. A spike in timeouts often indicates an external change: the target website became slower, added more anti-bot measures, or restructured its pages. For related debugging advice, see the questions about debugging failed runs and what happens when an actor fails.
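
The per-request timeout pattern described above can be sketched with AbortController and the built-in fetch available in Node.js 18+. The function name (fetchWithTimeout), URL, and 30-second default are illustrative, not an Apify API:

```javascript
// Per-request timeout using AbortController and Node 18+ built-in fetch.
// A single slow target then fails after timeoutMs instead of stalling the run.
async function fetchWithTimeout(url, timeoutMs = 30_000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return await res.text();
  } catch (err) {
    if (err.name === 'AbortError') {
      // Surface a clear error so the run log shows which request timed out.
      throw new Error(`Request to ${url} timed out after ${timeoutMs} ms`);
    }
    throw err;
  } finally {
    clearTimeout(timer); // always clear the timer so it cannot keep the process alive
  }
}
```

Catch this error in your request loop and skip or retry the URL, rather than letting one hung request consume the whole run's time budget.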
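
For Crawlee-based actors, the timeout-related options mentioned above all live in the crawler configuration. A minimal sketch, assuming a CheerioCrawler (the same options apply to the browser crawlers); the values are illustrative starting points, not recommended defaults:

```javascript
// Configuration sketch: timeout hardening for a Crawlee crawler.
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
  navigationTimeoutSecs: 30,     // give up on a single slow page load after 30 s
  requestHandlerTimeoutSecs: 60, // abort a handler that hangs mid-extraction
  maxRequestsPerCrawl: 1000,     // hard cap on total pages, so the run is bounded
  maxRequestRetries: 3,          // stop retrying permanently failing URLs (3 is the default)
  maxConcurrency: 10,            // process requests in parallel, not sequentially
  async requestHandler({ request, $, log }) {
    log.info(`Processing ${request.url}`); // progress visible in the run log
    // ... extraction logic here ...
  },
});

await crawler.run(['https://example.com']);
```

With maxRequestsPerCrawl and the two per-request timeouts set, the worst-case run length is bounded, so a sensible actor.json timeout can be chosen from those numbers rather than guessed.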
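
The progress callback mentioned above can be a simple counter closure. makeProgressLogger is a hypothetical helper for illustration, not part of Crawlee or the Apify SDK:

```javascript
// Minimal progress callback: logs completion percentage every `every` items.
// makeProgressLogger is an illustrative helper, not a Crawlee or Apify API.
function makeProgressLogger(total, every = 100) {
  let done = 0;
  return () => {
    done += 1;
    if (done % every === 0 || done === total) {
      const pct = ((done / total) * 100).toFixed(1);
      console.log(`Progress: ${done}/${total} (${pct}%)`);
    }
  };
}
```

Call the returned function once per processed request (for example, at the end of your request handler). If the percentage in the run log stops advancing, the actor is stuck rather than slow, which points at an infinite loop instead of a workload problem.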

Related questions