How do I resolve Phoenix Evals showing NOT_PARSABLE?
`NOT_PARSABLE` errors often occur when LLM responses exceed the `max_tokens` limit or produce incomplete JSON. Here's how to fix it:
1. Increase `max_tokens`: Update the model configuration as shown in the sketch after this list.
2. Update Phoenix: Use version ≥0.17.4, which removes token limits for OpenAI and increases defaults for other APIs.
3. Check logs: Look for `finish_reason="length"` to confirm token limits caused the issue.
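For step 1, here is a minimal sketch of what a larger token budget might look like, assuming an OpenAI judge model from `phoenix.evals`. The model name and `max_tokens` value are placeholders, and the keyword may be `model_name` in older releases.

```python
# Minimal sketch: raise the judge model's token budget so responses are not
# truncated mid-JSON. Model name and max_tokens value are placeholders.
from phoenix.evals import OpenAIModel

eval_model = OpenAIModel(
    model="gpt-4o",   # placeholder judge model
    temperature=0.0,
    max_tokens=1024,  # increase if eval responses are cut off and return NOT_PARSABLE
)
```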
If the above doesn't work, it's possible the LLM-as-a-judge output doesn't fit into the rails defined for that particular custom Phoenix eval. Double-check that the labels your prompt asks for match the rail expectations.
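To illustrate the rails check, here is a hedged sketch of a custom eval; the template text, labels, and column names are illustrative assumptions, not part of Phoenix's built-in evals.

```python
# Sketch of a custom eval where the rails must exactly match the labels the
# prompt requests; otherwise llm_classify marks the row as NOT_PARSABLE.
import pandas as pd
from phoenix.evals import OpenAIModel, llm_classify

CUSTOM_TEMPLATE = """
You are evaluating whether the answer below is grounded in the context.
Respond with a single word: "grounded" or "ungrounded".

Context: {context}
Answer: {answer}
"""

# Rails must list exactly the labels the prompt asks for, with matching casing.
rails = ["grounded", "ungrounded"]

df = pd.DataFrame(
    {
        "context": ["Paris is the capital of France."],
        "answer": ["The capital of France is Paris."],
    }
)

results = llm_classify(
    dataframe=df,
    template=CUSTOM_TEMPLATE,
    model=OpenAIModel(model="gpt-4o", max_tokens=1024),
    rails=rails,
)
print(results["label"])  # rows the judge answered outside the rails show NOT_PARSABLE
```

If the prompt asks for labels like "Grounded." with extra punctuation or wording while the rails list lowercase bare labels, the output cannot be mapped and the row is flagged, so aligning the two usually resolves the remaining errors.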