What is the LLM Output Optimizer?

The LLM Output Optimizer analyzes actor output and recommends how to reduce token consumption when feeding data to large language models. It scores every field by information density — high-value fields like name, url, and email are kept, while low-value fields like raw HTML, internal IDs, and timestamps are flagged for removal. The optimizer reads sample output from the actor's most recent successful run and produces a field-by-field analysis with token counts, value classifications, and recommended actions (keep, drop, or truncate). It generates an optimized schema and reports expected token savings — typically 40-70%.

The LLM Output Optimizer costs $0.35 per analysis. It does not run the target actor — it reads from the latest dataset. Visit apifyforge.com/tools/llm-optimizer for documentation.
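The kind of field-by-field analysis described above can be illustrated with a short sketch. This is a hypothetical approximation, not the optimizer's actual algorithm: the token estimate, the field lists, and the truncation threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a field-by-field keep/drop/truncate analysis.
# Heuristics, field names, and thresholds are illustrative assumptions,
# not the optimizer's real implementation.

def estimate_tokens(value) -> int:
    """Rough heuristic: roughly 4 characters per token."""
    return max(1, len(str(value)) // 4)

# Illustrative classifications (the real tool scores information density).
LOW_VALUE = {"rawHtml", "internalId", "timestamp"}

def analyze(record: dict, truncate_over: int = 100) -> list[dict]:
    """Return a per-field report with token counts and recommended actions."""
    report = []
    for field, value in record.items():
        tokens = estimate_tokens(value)
        if field in LOW_VALUE:
            action = "drop"
        elif tokens > truncate_over:
            action = "truncate"
        else:
            action = "keep"
        report.append({"field": field, "tokens": tokens, "action": action})
    return report

sample = {
    "name": "Acme Corp",
    "url": "https://example.com",
    "rawHtml": "<html>" + "x" * 2000 + "</html>",
    "timestamp": "2024-01-01T00:00:00Z",
}

report = analyze(sample)
total = sum(r["tokens"] for r in report)
kept = sum(r["tokens"] for r in report if r["action"] == "keep")
print(f"estimated savings: {1 - kept / total:.0%}")
```

Dropping a single raw-HTML field dominates the savings in this toy record, which is consistent with the 40-70% range the tool reports on typical scraper output.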
