11.14.2023
New Releases, Enhancements, Changes + Arize in the News!
To effectively track the usage of these LLMs over time, it is imperative to have dashboards that visualize the core attributes of the LLM systems and applications. Arize supports tracking core fields for LLMs by easily defining fields that designate LLM token usage and latency as part of the Arize schema. Learn more here.
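The kinds of aggregates such a dashboard visualizes can be sketched in a few lines. The field names below (prompt_tokens, completion_tokens, latency_ms) are illustrative assumptions, not the exact Arize schema names:

```python
from statistics import mean

# Hypothetical LLM inference records carrying the core fields a usage
# dashboard would track over time (names are assumptions, not the Arize schema).
records = [
    {"prompt_tokens": 120, "completion_tokens": 45, "latency_ms": 820},
    {"prompt_tokens": 310, "completion_tokens": 90, "latency_ms": 1430},
    {"prompt_tokens": 95,  "completion_tokens": 30, "latency_ms": 610},
]

# Aggregate token usage and average latency across the logged calls.
total_tokens = sum(r["prompt_tokens"] + r["completion_tokens"] for r in records)
avg_latency_ms = mean(r["latency_ms"] for r in records)

print(total_tokens, round(avg_latency_ms, 1))  # -> 690 953.3
```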
Users can now iterate on prompts in the Prompt Playground using the Azure OpenAI Integration. This integration allows users to iterate on prompt templates, parameters, and variables in the platform and compare responses. Additionally, users can now compare LLM providers by running the same prompt across different LLMs. Learn more here.
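The template-iteration workflow above can be sketched outside the platform as well. This is a minimal illustration of filling template variables to produce prompt variants for comparison; the template syntax and variable names are assumptions, not the Prompt Playground or Azure OpenAI API:

```python
# A prompt template with two variables to iterate on.
template = "Summarize the following {doc_type} in {num_sentences} sentences:\n{text}"

# Hypothetical variants: same task, different parameter values.
variants = [
    {"doc_type": "support ticket", "num_sentences": 1, "text": "..."},
    {"doc_type": "support ticket", "num_sentences": 3, "text": "..."},
]

# Render each variant; in the Playground these runs would then be sent to
# one or more LLM providers and their responses compared side by side.
prompts = [template.format(**v) for v in variants]
for p in prompts:
    print(p)
```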
Logging a corpus dataset for retrieval troubleshooting is now easier, and with the addition of connecting lines between the user query and the context retrieved, troubleshooting retrieval is faster. By visualizing what context was retrieved, and how far the embeddings are from the user query, AI and ML engineers can better understand where context is missing from their knowledge base, or where irrelevant context is being retrieved. Learn more here.
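The "how far the embeddings are from the user query" check boils down to a similarity measure in embedding space. A minimal sketch with toy vectors (real embeddings would come from an embedding model):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: a user query and two retrieved context chunks.
query = [0.9, 0.1, 0.0]
contexts = {
    "relevant_chunk":   [0.85, 0.15, 0.05],
    "irrelevant_chunk": [0.05, 0.10, 0.95],
}

# Low similarity flags context that sits far from the query in embedding
# space, i.e. likely irrelevant retrievals.
for name, emb in contexts.items():
    print(name, round(cosine_similarity(query, emb), 2))
```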
Following OpenAI's recent release, Arize now supports GPT-4 Turbo. Users can now iterate on prompt templates and compare performance across LLMs in Prompt Playground.
Table integrations let users ingest their own generative models directly from their BigQuery, Databricks, or Snowflake tables, simplifying the process of ingesting generative models into Arize.
Users can now set a manual threshold value and bulk update all managed drift monitors. Within any model's Monitors tab, navigate to the config via the 'Setup Monitors' tab by clicking the 'Edit Drift Config' link, or go directly through the Config tab.
Prompt and response embedding vectors are no longer required for generative models: prompts and responses can now be logged on their own, without an accompanying embedding vector.
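A minimal sketch of what this change enables when constructing a generative record; the field names and helper below are illustrative assumptions, not the Arize logging API:

```python
# Build a generative record where embedding vectors are optional:
# text-only prompt/response logging is now sufficient.
def make_record(prompt, response, prompt_embedding=None, response_embedding=None):
    record = {"prompt": prompt, "response": response}
    if prompt_embedding is not None:
        record["prompt_embedding"] = prompt_embedding
    if response_embedding is not None:
        record["response_embedding"] = response_embedding
    return record

# Logging prompt/response text on its own, with no vectors attached.
r = make_record("What is drift?", "Drift is a change in the data distribution.")
print(sorted(r.keys()))  # -> ['prompt', 'response']
```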
The latest courses in our LLM Observability Certification Series:
AI ROI: Guide To Observability Value Statistics
Catch up on the latest in AI research papers with these new community readings: