What is the LLM Output Optimizer?
The LLM Output Optimizer analyzes actor output and recommends ways to reduce token consumption when feeding scraped data to large language models. It scores every field by information density: high-value fields such as name, url, and email are kept, while low-value fields such as raw HTML, internal IDs, and timestamps are flagged for removal.

The optimizer reads sample output from the actor's most recent successful run (it does not run the target actor itself, it reads from the latest dataset) and produces a field-by-field analysis with token counts, value classifications, and a recommended action for each field: keep, drop, or truncate. From this analysis it generates an optimized schema and reports the expected token savings, typically 40-70%.

The LLM Output Optimizer costs $0.35 per analysis. Visit apifyforge.com/tools/llm-optimizer for documentation.
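To make the field-by-field analysis concrete, here is a minimal sketch of the kind of scoring the optimizer performs. The field lists, the truncation threshold, and the 4-characters-per-token estimate are all illustrative assumptions, not the tool's actual implementation:

```python
import json

# Assumed classification lists -- the real tool scores fields by
# information density rather than using fixed name lists.
HIGH_VALUE = {"name", "url", "email", "title"}
LOW_VALUE = {"raw_html", "internal_id", "timestamp"}
TRUNCATE_OVER = 200  # assumed token threshold for the "truncate" action

def estimate_tokens(value) -> int:
    """Rough token estimate: about 4 characters per token of JSON text."""
    return max(1, len(json.dumps(value)) // 4)

def analyze(record: dict) -> list[dict]:
    """Produce a per-field report with a token count and a recommended action."""
    report = []
    for field, value in record.items():
        tokens = estimate_tokens(value)
        if field in LOW_VALUE:
            action = "drop"
        elif field in HIGH_VALUE or tokens <= TRUNCATE_OVER:
            action = "keep"
        else:
            action = "truncate"
        report.append({"field": field, "tokens": tokens, "action": action})
    return report

sample = {
    "name": "Acme Widget",
    "url": "https://example.com/widget",
    "raw_html": "<div>" + "x" * 4000 + "</div>",
    "description": "A compact widget for everyday use.",
}
for row in analyze(sample):
    print(row)
```

Dropping the bulky raw_html field alone removes the vast majority of the tokens in this sample record, which is where the reported 40-70% savings typically come from.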