12.19.2024

New Releases, Enhancements, + Changes

Last updated 3 months ago


What's New

Prompt Hub

The Prompt Hub is a centralized repository for managing, iterating on, and deploying prompt templates within the Arize platform. It serves as a collaborative workspace where users refine and store templates for various use cases, including production applications and experimentation.

Key features of the Prompt Hub include:

  • Template Management: Users can save templates directly from the Prompt Playground along with associated LLM parameters, function definitions, and metadata required to reproduce specific LLM calls.

  • Version Control: Every saved template supports versioning, enabling users to track updates, experiment with variations, and revert to previous versions if needed.

  • Collaboration and Reusability: Saved templates can be shared across teams, facilitating collaboration and consistency in production workflows. Templates can also be reloaded into the Prompt Playground or accessed via APIs for seamless integration into codebases and online tasks.

  • Evaluation and Optimization: By saving outputs as experiments, users can compare templates, compute evaluation metrics, and analyze performance both quantitatively and qualitatively.

Managed Code Evaluators

Evaluators available:

  • Matches Regex: Checks whether text matches a specific regular expression pattern.

  • JSON Parseable: Validates that LLM output is well-formed JSON.

  • Contains Any Keyword: Checks whether any of the specified keywords appear in the text.

  • Contains All Keywords: Validates that all specified keywords are present in the text.
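These evaluators run as plain code rather than LLM calls, so each check is cheap and deterministic. As a rough illustration (the function names here are hypothetical, not the platform's API), the four checks amount to:

```python
# Plain-Python sketches of the four managed code evaluators.
# Function names are illustrative, not the Arize API.
import json
import re

def matches_regex(text: str, pattern: str) -> bool:
    """Matches Regex: does the text contain a match for the pattern?"""
    return re.search(pattern, text) is not None

def json_parseable(text: str) -> bool:
    """JSON Parseable: is the LLM output well-formed JSON?"""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

def contains_any_keyword(text: str, keywords: list[str]) -> bool:
    """Contains Any Keyword: case-insensitive check for at least one keyword."""
    lowered = text.lower()
    return any(k.lower() in lowered for k in keywords)

def contains_all_keywords(text: str, keywords: list[str]) -> bool:
    """Contains All Keywords: case-insensitive check that every keyword appears."""
    lowered = text.lower()
    return all(k.lower() in lowered for k in keywords)
```

Because no LLM request is involved, these can run over every span in a trace at negligible cost.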

Enhancements

Experiment Creation From Playground

  • Quickly Experiment: After running the playground successfully on a dataset, click the "Save as Experiment" button.

  • Debug: In addition to the newly generated response, we save the LLM invocation parameters and prompt template message structure so runs can be replayed.

  • Compare: Playground outputs can be compared just like existing experiments.

New Monitor Visualization

We’ve rolled out monitor improvements. Here's what's new:

  • Alert Status Graph: Maps directly to the alerts users see, giving them a transparent and seamless way to line up alerts with the real-time metric visualization.

  • Cleaner UX: Updates include removing "last run monitor time," aligning card titles and Y-axis with metric names, and simplifying by removing granularity.

Note: The number of alert ticks displayed is limited; users may need to zoom into specific dates to see all alerts.

LangChain Instrumentation

Support for sessions via LangChain's native thread tracking in TypeScript is now available, making it easy to track multi-turn conversations and threads using LangChain.js.

📚 New Content

The latest video tutorials, paper readings, ebooks, self-guided learning modules, and technical posts:

We recently launched a set of pre-built, off-the-shelf evaluators, enabling users to evaluate their spans without requiring requests to an LLM-as-a-Judge.

We just released a new flow for creating experiments from outputs generated with the Prompt Playground.

  • The Prompt Hub
  • How Booking.com Personalizes Travel Planning with AI Trip Planner and Arize AI
  • How to Add LLM Evaluations to CI/CD Pipelines
  • 2025 AI Conferences
  • Merge, Ensemble, and Cooperate! A Survey on Collaborative LLM Strategies