Benchmarks
Benchmarking the Arize SDK

Benchmark tests of the Arize Python SDK

The ability to ingest data with low latency is important at scale. The benchmarking Colab notebook below demonstrates how efficiently Arize uploads data from a Python environment:
Sending 10 Million Inferences to Arize in 90 Seconds
Colab Link
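As a rough illustration of what the Colab measures, here is a minimal, hypothetical timing harness. The `send_fn` stand-in and all names here are assumptions for the sketch, not the Arize SDK API; in the notebook, the timed call would be an actual SDK upload:

```python
import time

def benchmark_upload(send_fn, records):
    """Time send_fn over records; return throughput in records/sec.

    send_fn is a placeholder for a real ingestion call (assumption,
    not the Arize SDK interface).
    """
    start = time.perf_counter()
    send_fn(records)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed

def noop_send(records):
    # Dummy sender so the harness runs standalone.
    pass

rate = benchmark_upload(noop_send, list(range(100_000)))

# For reference, the headline figure above works out to roughly
# 10,000,000 inferences / 90 seconds ~ 111,111 inferences per second.
headline_rate = 10_000_000 / 90
```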