Phoenix


07.02.2024: Function Call Evaluations ⚒️

Available in Phoenix 4.6+


We are introducing a new built-in function call evaluator that scores the function/tool-calling capabilities of your LLMs. This off-the-shelf evaluator will help you ensure that your models are not just generating text but also effectively interacting with tools and functions as intended.

This evaluator checks for issues arising from function routing, parameter extraction, and function generation.
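To make the three failure modes concrete, here is a hypothetical sketch (not the Phoenix API) of what a function call check inspects: whether the model emitted parsable arguments (function generation), selected the right tool (function routing), and filled in the right argument values (parameter extraction). The `check_tool_call` helper and its inputs are illustrative assumptions, not part of the Phoenix evaluator.

```python
# Hypothetical illustration of the three failure modes a function call
# evaluator inspects, reduced to simple structural checks.
import json


def check_tool_call(call: dict, expected: dict) -> dict:
    """Score one emitted tool call against a reference call."""
    # Function generation: did the model emit parsable JSON arguments?
    try:
        args = json.loads(call.get("arguments", ""))
    except (TypeError, json.JSONDecodeError):
        return {"generation": False, "routing": False, "parameters": False}
    return {
        "generation": True,
        # Function routing: was the correct tool selected?
        "routing": call.get("name") == expected["name"],
        # Parameter extraction: do the extracted arguments match?
        "parameters": args == expected["arguments"],
    }


# Example: the model routed to the right tool but extracted the wrong city.
emitted = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
reference = {"name": "get_weather", "arguments": {"city": "London"}}
print(check_tool_call(emitted, reference))
# {'generation': True, 'routing': True, 'parameters': False}
```

In practice the built-in evaluator uses an LLM judge rather than exact matching, so it can also catch semantically wrong-but-well-formed calls that structural checks like these would miss.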

Check out a full walkthrough of the evaluator.