class ArizeExportClient

The ArizeExportClient class encapsulates the required connection parameters for the Arize exporter.

Note that the ArizeExportClient is available in Arize SDK version 7.0.3 and above. You can install it with pip install "arize>=7.0.3".

The ArizeExportClient requires an Arize API key. You can get one in one of two ways:

  1. Use the export functionality on the Embeddings, Performance Tracing, or Datasets tabs to generate a code snippet. Copying the code snippet copies your API key as well.

  2. Get the API key from the GraphQL explorer.

Once you have the API key, you can initialize the client. The client reads the key from one of two places.

By default, the ArizeExportClient looks for the API key in an environment variable called ARIZE_API_KEY:

import os
from arize.exporter import ArizeExportClient

# Make sure to do this before initializing the client
os.environ['ARIZE_API_KEY'] = <ARIZE_API_KEY>

client = ArizeExportClient()

You can also initialize the ArizeExportClient with the key passed in as an argument:

from arize.exporter import ArizeExportClient

client = ArizeExportClient(api_key=<ARIZE_API_KEY>)

Reference

api_key (Optional[str]): Arize-provided personal API key associated with your user profile, located on the API Explorer page. An API key is required to initiate a new client; it can be passed in explicitly, set as an environment variable, or configured in a profile file.

host (Optional[str]): URI endpoint host for sending your export request to Arize AI. Defaults to https://flight.arize.com.

port (Optional[int]): URI endpoint port for sending your export request to Arize AI. Defaults to 443.
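Putting the two options together, one defensive pattern is to resolve the key yourself and fail fast with a clear message when it is missing. A minimal sketch, using only the documented api_key argument and the ARIZE_API_KEY environment variable:

import os

from arize.exporter import ArizeExportClient

# Resolve the key explicitly so a missing configuration fails with a clear
# message instead of surfacing as an opaque error inside the client.
api_key = os.environ.get("ARIZE_API_KEY")
if not api_key:
    raise RuntimeError(
        "Set the ARIZE_API_KEY environment variable or pass api_key= to ArizeExportClient"
    )

client = ArizeExportClient(api_key=api_key)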

export_model_to_df

This method is invoked on an instance of ArizeExportClient and is the primary method for exporting data from Arize to a pandas DataFrame.

To use this method, you first need to get your space id and your model id.

Space id:

The easiest way to get your space id is from the URL when you visit the Arize platform. If your URL is:

https://app.arize.com/organizations/.../spaces/U3BhY2U6NzU0

your space id is the series of numbers and letters right after /spaces/. In this case, the space id is U3BhY2U6NzU0.
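For a programmatic route, a minimal sketch (using the example URL above) pulls the id out of the path segment that follows /spaces/:

# Minimal sketch: extract the space id from an Arize platform URL
url = "https://app.arize.com/organizations/.../spaces/U3BhY2U6NzU0"
space_id = url.split("/spaces/")[1].split("/")[0]
print(space_id)  # prints U3BhY2U6NzU0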

Model id:

Your model id is the same as the display name of your model. For example, for our demo fraud model, the model id is arize-demo-fraud-use-case.

Code examples:

from datetime import datetime
from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

client = ArizeExportClient()

df = client.export_model_to_df(
    space_id='U3BhY2U6NzU0',
    model_id='arize-demo-fraud-use-case',
    environment=Environments.PRODUCTION,
    start_time=datetime.fromisoformat('2022-05-04T01:10:26.249+00:00'),
    end_time=datetime.fromisoformat('2022-05-04T01:10:27.249+00:00'),
)

By default, actuals are not included in the export. To include ground truth, pass the include_actuals argument:

from datetime import datetime
from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

client = ArizeExportClient()

df = client.export_model_to_df(
    space_id='U3BhY2U6NzU0',
    model_id='arize-demo-fraud-use-case',
    environment=Environments.PRODUCTION,
    include_actuals=True,
    start_time=datetime.fromisoformat('2022-05-04T01:10:26.249+00:00'),
    end_time=datetime.fromisoformat('2022-05-04T01:10:27.249+00:00'),
)

Export of pre-production data is supported as well. To export pre-production data, set the environment to either Environments.TRAINING or Environments.VALIDATION. Optionally include model_version and/or batch_id to further refine your export:

from datetime import datetime
from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

client = ArizeExportClient()

df = client.export_model_to_df(
    space_id='U3BhY2U6NzU0',
    model_id='arize-demo-fraud-use-case',
    environment=Environments.TRAINING,
    include_actuals=True,
    start_time=datetime.fromisoformat('2022-05-04T01:10:26.249+00:00'),
    end_time=datetime.fromisoformat('2022-05-04T01:10:27.249+00:00'),
    model_version='v1',
    batch_id='test',
)
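Whichever environment you export, export_model_to_df returns a standard pandas DataFrame, so the usual inspection tools apply. A quick sketch of sanity checks, assuming df from the examples above:

# Basic sanity checks on the exported DataFrame (df comes from the examples above)
print(df.shape)              # number of exported rows and columns
print(df.columns.tolist())   # typically prediction, feature, and (if requested) actual columns
print(df.head())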

Reference:

space_id (str): The id of the space to export models from. It can be retrieved from the URL of the Space Overview page in the Arize UI.

model_id (str): The name of the model to export. It can be found in the Model Overview tab in the Arize UI.

environment (Environment): The environment of the model data to export (Production, Training, or Validation). Must be a member of the Environments enum in arize.utils.types.

start_time (datetime): The start time for the data to export; the start time is inclusive. The time interval has hourly granularity. Must be a Python datetime object.

end_time (datetime): The end time for the data to export; the end time is not inclusive. The time interval has hourly granularity. Must be a Python datetime object.

include_actuals (Optional[bool]): Whether to include actuals / ground truth in the exported data. include_actuals only applies to the Production environment and defaults to False.

model_version (Optional[str]): The version of the model to export. Model versions for all model environments can be found in the Datasets tab on the model page in the Arize UI.

batch_id (Optional[str]): The batch name of the model to export. Batches only apply to the Validation environment and can be found in the Datasets tab on the model page in the Arize UI.

where (Optional[str]): A query string that matches the query filter syntax, letting you filter on any column and value, for example "name = 'test'".
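As a final sketch, the where argument shown above can narrow an export to rows matching a filter. The filter string here mirrors the documented example and assumes your model actually has a column named name; substitute a column from your own schema:

from datetime import datetime
from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

client = ArizeExportClient()

# Filtered export: only rows matching the where clause are returned.
# "name = 'test'" is the documented example filter and is hypothetical here.
df = client.export_model_to_df(
    space_id='U3BhY2U6NzU0',
    model_id='arize-demo-fraud-use-case',
    environment=Environments.PRODUCTION,
    start_time=datetime.fromisoformat('2022-05-04T01:10:26.249+00:00'),
    end_time=datetime.fromisoformat('2022-05-04T01:10:27.249+00:00'),
    where="name = 'test'",
)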



