New Releases, Enhancements, Changes + Arize in the News!

What's New

Default Dashboards for LLM Token and Usage Tracking

Tracking LLM usage over time calls for dashboards that visualize the core attributes of an LLM system or application. Arize supports this by letting users easily designate fields for LLM token usage and latency as part of the Arize schema. Learn more here.
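As a sketch of the kind of aggregation such a dashboard surfaces — the record fields `prompt_tokens`, `completion_tokens`, and `latency_ms` below are illustrative, not Arize's actual schema column names:

```python
from collections import defaultdict

# Hypothetical LLM inference records; field names are illustrative only.
records = [
    {"day": "2023-11-01", "prompt_tokens": 120, "completion_tokens": 80, "latency_ms": 410},
    {"day": "2023-11-01", "prompt_tokens": 200, "completion_tokens": 150, "latency_ms": 620},
    {"day": "2023-11-02", "prompt_tokens": 90, "completion_tokens": 60, "latency_ms": 380},
]

def daily_usage(records):
    """Aggregate total token usage and mean latency per day."""
    totals = defaultdict(lambda: {"tokens": 0, "latency_sum": 0.0, "count": 0})
    for r in records:
        d = totals[r["day"]]
        d["tokens"] += r["prompt_tokens"] + r["completion_tokens"]
        d["latency_sum"] += r["latency_ms"]
        d["count"] += 1
    return {
        day: {"total_tokens": d["tokens"],
              "mean_latency_ms": d["latency_sum"] / d["count"]}
        for day, d in totals.items()
    }

usage = daily_usage(records)
```

In Arize these fields are mapped once in the schema at ingestion time, and the dashboarding is handled by the platform rather than hand-rolled code like the above.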

Azure OpenAI Integration

Users can now iterate on prompts in the Prompt Playground using the Azure OpenAI Integration. This integration allows users to iterate on prompt templates, parameters, and variables in the platform and compare responses. Additionally, users can now compare LLM providers by comparing prompt runs between LLMs. Learn more here.

Corpus Dataset Support for RAG Applications

Logging a corpus dataset for retrieval troubleshooting is now easier, and connecting lines between the user query and the retrieved context make troubleshooting faster. By visualizing which context was retrieved, and how far its embeddings are from the user query, AI and ML engineers can better understand where context is missing from their knowledge base, or where irrelevant context is being retrieved. Learn more here.
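The "how far the embeddings are" intuition can be made concrete with cosine distance. This is a minimal standalone sketch, not Arize's implementation; the toy vectors and the 0.5 threshold are assumptions for illustration:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; larger means the vectors point further apart."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def flag_irrelevant(query_emb, contexts, threshold=0.5):
    """Return ids of retrieved contexts whose embedding is far from the query."""
    return [cid for cid, emb in contexts.items()
            if cosine_distance(query_emb, emb) > threshold]

query = [1.0, 0.0, 0.0]
contexts = {
    "doc_close": [0.9, 0.1, 0.0],  # nearly parallel to the query
    "doc_far":   [0.0, 1.0, 0.0],  # orthogonal to the query, distance 1.0
}
flagged = flag_irrelevant(query, contexts, threshold=0.5)
```

A retrieval that consistently returns flagged (far-away) contexts suggests either a gap in the knowledge base or a retriever pulling irrelevant documents — exactly the two failure modes the corpus visualization is meant to expose.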

GPT-4 Turbo Support

Following OpenAI's recent release, Arize now supports GPT-4 Turbo. Users can iterate on prompt templates and compare performance across LLMs in Prompt Playground.


Generative LLM Ingestion Through Table Integrations

Table integrations let users ingest their own generative models through their BigQuery, Databricks, or Snowflake tables, simplifying generative model ingestion into Arize.

Bulk Edit Drift Thresholds

Users can now set a manual threshold value and bulk-update all managed drift monitors. From any model's Monitors tab, open the drift config via the 'Edit Drift Config' link under 'Setup Monitors', or go directly through the Config tab.

Prompt and Response Ingestion Updates

  • Prompt and response are no longer both required for generative models; a prompt or a response can now be logged on its own.

  • Embedding vectors are no longer required to send a prompt or a response.
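A minimal sketch of what these relaxed requirements mean for a logged record — the field names (`prompt_text`, `response_text`, `prompt_vector`) and the validation helper are illustrative assumptions, not the SDK's actual parameters:

```python
def validate_generative_record(record):
    """Accept a record if it carries at least one of prompt/response.

    Embedding vectors are optional: raw prompt or response text may be
    logged without an accompanying vector.
    """
    has_prompt = bool(record.get("prompt_text")) or bool(record.get("prompt_vector"))
    has_response = bool(record.get("response_text")) or bool(record.get("response_vector"))
    if not (has_prompt or has_response):
        return False, "need at least a prompt or a response"
    return True, "ok"

# A prompt logged on its own: no response, no embedding vector.
ok, msg = validate_generative_record({"prompt_text": "What is model drift?"})
```

Under the previous requirements a record like the one above — text only, no vector, no response — would have been rejected; it is now a valid ingestion payload.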

🎓 New Educational Content

The latest courses in our LLM Observability Certification Series:

Structured Data Extraction

AI ROI: Guide To Observability Value Statistics

📚 New Paper Readings

Catch up on the latest in AI research papers with these new community readings:
