04.2025

Our April 2025 releases

Major design refresh in Arize AX

We've refreshed Arize AX with polished fonts, spacing, color, and iconography throughout the platform.

Custom code evaluators

You can now run your own custom Python code evaluators in Arize against your data in a secure environment. Use background tasks to run any custom code, such as URL validation or keyword matching.
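
For illustration, a custom code evaluator can be an ordinary Python function. The sketch below is a keyword-match check; the function signature and result shape are assumptions for illustration only, not the specific interface Arize's background tasks expect.

# Minimal sketch of a keyword-match evaluator.
# The EvalResult shape and the function signature are hypothetical,
# not the documented Arize interface.
from dataclasses import dataclass

@dataclass
class EvalResult:
    label: str    # e.g. "match" / "no_match"
    score: float  # 1.0 if any keyword is present, else 0.0

def keyword_match_eval(output: str, keywords: list[str]) -> EvalResult:
    """Check a model output for the presence of any required keyword."""
    text = output.lower()
    hit = any(kw.lower() in text for kw in keywords)
    return EvalResult(label="match" if hit else "no_match", score=1.0 if hit else 0.0)

# Example: flag responses that never mention a refund or credit.
print(keyword_match_eval("A refund has been issued to the customer.", ["refund", "credit"]))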

Security audit logs for enterprise customers

Improve your compliance and policy adherence. You can now use audit logs to monitor data access in Arize. Note: this feature is completely opt-in, and tracking is not enabled unless a customer explicitly requests it.

Larger dataset runs in prompt playground

We've increased the row limit for datasets in the playground, so you can run prompts in parallel on up to 100 examples.

Evaluations on experiments

You can now create and run evals on your experiments from the UI. Compare performance across different prompt templates, models, or configurations without code.

Cancel running background tasks

When running evaluations using background tasks, you can now cancel them mid-flight while observing task logs.

Improved UI for functions in prompt playground

We've made it easier to view, test, and validate your tool calls in prompt playground.

Compare prompts side by side

Compare the outputs of a new prompt and the original prompt side-by-side. Tweak model parameters and compare results across your datasets.

Image segmentation support for CV models

We now support logging image segmentation to Arize. Log your segmentation coordinates and compare your predictions vs. your actuals.

New time selector on your traces

We've made it much easier to drill into specific time ranges, with quick presets like "last 15 minutes" and custom shorthand for specific dates and times, such as 10d, 4/1 - 4/6, or 4/1 3:00am.

Prompt Hub Python SDK

Access and manage your prompts in code with support for OpenAI and VertexAI.

pip install "arize[PromptHub]"
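
As a rough sketch of pulling a hub-managed prompt and sending it to OpenAI: the fetch_prompt_messages helper below is a hypothetical placeholder standing in for the prompt hub call (the actual SDK methods aren't shown here); the OpenAI client usage itself is standard.

# Sketch only: fetch_prompt_messages is a hypothetical stand-in for retrieving
# a prompt from the hub, not the documented Arize SDK call.
from openai import OpenAI

def fetch_prompt_messages(prompt_name: str, variables: dict) -> list[dict]:
    # Placeholder template; in practice this text would come from the prompt hub.
    template = "Summarize the following support ticket: {ticket}"
    return [{"role": "user", "content": template.format(**variables)}]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = fetch_prompt_messages("ticket-summarizer", {"ticket": "Customer cannot log in."})
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # example model name
print(response.choices[0].message.content)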

View task run history and errors

Get full visibility into your evaluation task runs, including when each run started, what triggered it, and whether there were errors.

Run evals and tasks over a date range

Easily run your online evaluation tasks over historical data. Select "Run Over Date Range" to get started.
