What is the LLM Output Optimizer?
By Ryan Clinton · Updated Mar 1, 2026
The LLM Output Optimizer analyzes an actor's output and recommends ways to reduce token consumption when feeding that data to large language models. It scores every field by information density — high-value fields like name, url, and email are kept, while low-value fields like raw HTML, internal IDs, and timestamps are flagged for removal.
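The density-based scoring above can be sketched as a simple classifier. The field names and rules here are illustrative assumptions for this sketch, not the optimizer's actual heuristics:

```python
# Illustrative field lists — not the optimizer's real rules.
KEEP_FIELDS = {"name", "url", "email"}                # high information density
DROP_FIELDS = {"html", "internal_id", "created_at"}   # low information density

def classify_field(field_name: str) -> str:
    """Return a recommended action for a field: keep, drop, or review."""
    if field_name in KEEP_FIELDS:
        return "keep"
    if field_name in DROP_FIELDS:
        return "drop"
    return "review"  # unknown fields are left for human judgment
```

A real implementation would also weigh the field's content (e.g., markup density, value entropy), not just its name.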
The optimizer reads sample output from the actor's most recent successful run and produces a field-by-field analysis with token counts, value classifications, and recommended actions (keep, drop, or truncate). It generates an optimized schema and reports expected token savings — typically 40-70%.
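To make the reported savings concrete, here is a minimal sketch of a per-field token estimate and the resulting savings figure. The ~4-characters-per-token heuristic and the helper names are assumptions for illustration, not the tool's actual method:

```python
def estimate_tokens(value: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(value) // 4)

def token_savings(record: dict, drop: set) -> float:
    """Fraction of tokens saved by dropping the given fields from a record."""
    total = sum(estimate_tokens(str(v)) for v in record.values())
    kept = sum(estimate_tokens(str(v)) for k, v in record.items() if k not in drop)
    return 1 - kept / total

record = {"name": "Acme", "html": "<div>" * 100}
savings = token_savings(record, drop={"html"})  # dominated by the raw HTML field
```

Records dominated by raw HTML or other verbose low-value fields are exactly where savings in the reported 40–70% range (or higher) come from.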
LLM Output Optimizer costs $0.35 per analysis. It does not run the target actor — it reads from the latest dataset. Visit apifyforge.com/tools/llm-optimizer for documentation.
Related term
An Apify Dataset is a structured, append-only storage system designed for tabular data produced by actor runs.
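A toy model of the append-only behavior described above, assuming nothing about Apify's actual API — items can be pushed and read back, but never updated or deleted:

```python
import json

class AppendOnlyDataset:
    """Toy append-only tabular store (illustrative, not Apify's client API)."""

    def __init__(self):
        self._items = []

    def push(self, item: dict) -> None:
        # Store a deep copy so later mutation of the caller's dict
        # cannot alter what was appended.
        self._items.append(json.loads(json.dumps(item)))

    def items(self) -> list:
        # Reads return a copy; there is deliberately no update/delete method.
        return list(self._items)
```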