02.13.2023
New Releases, Enhancements, Changes + Arize in the News!
Editable and dynamic distribution comparisons to visualize data and calculate drift metrics (e.g., PSI) with increased flexibility. Change binning options to use across monitors, performance tracing, and the model overview page.
Edit the distribution comparison by clicking on 'View Feature Details' on the Performance Tracing page. Learn more about best practices here.
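As a sketch of the drift calculation these comparisons drive: population stability index (PSI) compares binned baseline and production distributions, so the binning choice directly changes the number. The helper below is a generic illustration of the formula, not Arize's implementation; all names are ours.

```python
import numpy as np

def psi(baseline, production, bins=10, eps=1e-6):
    """Population Stability Index between two samples.

    Bin edges are derived from the baseline; changing `bins`
    changes the result, which is why editable binning matters.
    (Production values outside the baseline range are dropped
    by np.histogram in this simple sketch.)
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(production, bins=edges)
    p = p / p.sum() + eps  # eps avoids log(0) for empty bins
    q = q / q.sum() + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
base = rng.normal(0, 1, 5_000)
drifted = rng.normal(0.5, 1, 5_000)
print(psi(base, base))     # ~0: identical distributions
print(psi(base, drifted))  # noticeably larger: drift
```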
Autogenerate embeddings by simply passing your input to Arize. From there, include the extracted embeddings in your pandas DataFrame to log to Arize.
Available in the Python SDK version >= 6.0.0. Learn more here.
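A minimal sketch of the logging shape this enables, using pandas only. The `embed()` helper here is a hypothetical stand-in for the SDK's embedding autogeneration (with SDK >= 6.0.0 Arize extracts the vectors for you); only the DataFrame layout, an embedding vector stored per row alongside its raw input, is the point.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for autogenerated embeddings; the real
# vectors come from the Arize Python SDK, not this toy function.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=8)  # toy 8-dimensional vector

df = pd.DataFrame({
    "prediction_id": ["a", "b"],
    "text": ["great product", "arrived broken"],
})
# Store each embedding as an array-valued column next to the
# raw input it was generated from, then log the DataFrame.
df["text_vector"] = df["text"].map(embed)
print(df["text_vector"].iloc[0].shape)  # (8,)
```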
Select a custom metric as your default performance metric to auto-populate it throughout the platform (performance tracing, drift, dimension details, and monitor views).
Navigate to the 'Config' tab to pick from a custom metrics dropdown menu.
Performance insight support for models evaluated using a custom metric. Surface a list of your model's worst-performing slices for faster root cause analysis.
Performance insights are automatically populated in the 'Performance Insights' card on the 'Performance Tracing' tab.
Edit monitor configurations directly in the monitor's tab for a simplified workflow. Previously located in the 'Config' tab, these settings (integrations, evaluation windows, and alerting options) can now be adjusted in the same place where you set up monitors.
Kolmogorov-Smirnov Test (KS Test)
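The KS statistic measures the maximum gap between two samples' empirical CDFs, making it a binning-free alternative for drift checks. A quick generic illustration with SciPy's two-sample test (shown for intuition, not as Arize's internal implementation):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
baseline = rng.normal(0, 1, 2_000)
production = rng.normal(0.3, 1, 2_000)  # shifted mean -> drift

# statistic = max vertical distance between the two empirical
# CDFs (0 to 1); a small p-value suggests real distribution shift.
result = ks_2samp(baseline, production)
print(result.statistic, result.pvalue)
```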
Introducing Deep Papers, a new podcast series hosted by AI Pub & Arize to dive into some of the seminal research in AI.
We kick off by chatting about the research underpinning ChatGPT and InstructGPT with the key OpenAI data scientists behind these large language models, covering how they were trained, discovered, and put to use.
Arize AI is listed in the Gartner Guide for AI Trust, Risk, and Security Management (AI TRiSM) for the second year in a row! As the report notes, “monitoring AI production data for drift, bias, attacks, data entry and process mistakes is key to achieving optimal AI performance, and for protecting organizations from malicious attacks.”