Embeddings for Tabular Data (Multivariate Drift)

Use Embeddings to Catch Multivariate Drift in Tabular Data

Check out an example of how to create tabular embeddings in our Google Colab for Generating Embeddings from Tabular Data!

What are Tabular Embeddings?

Tabular embeddings are embeddings generated from rows of tabular data: each row of your dataframe is represented by one embedding vector.
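
For intuition, here is a minimal sketch of the resulting structure (the dataframe values are illustrative and random vectors stand in for real embeddings; distilbert-base-uncased, used later on this page, produces 768-dimensional vectors):

import numpy as np
import pandas as pd

# One fixed-length embedding vector per dataframe row
df = pd.DataFrame({"age": [10, 35], "state": ["CA", "NY"], "credit_score": [560, 710]})
df["tabular_embedding_vector"] = [np.random.rand(768) for _ in range(len(df))]  # placeholders
print(df["tabular_embedding_vector"].iloc[0].shape)  # (768,)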

Why is it useful?

We can use embeddings generated from tabular data to monitor multivariate drift. Multivariate drift captures drift across combinations of features that may not be visible when looking at any single feature in isolation.

Conceptual example: there is an abnormal increase in tall people with small shoe sizes, which is not obvious from the average height or average shoe size alone.
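
To make this concrete, here is a minimal NumPy sketch (all numbers are hypothetical) in which every per-feature mean and variance stays the same while the joint distribution drifts:

import numpy as np

rng = np.random.default_rng(0)

# Baseline: height (cm) and shoe size (EU) are positively correlated
baseline = rng.multivariate_normal(mean=[175, 42], cov=[[50, 12], [12, 4]], size=10_000)

# Production: identical marginal means and variances, but the correlation flips,
# so "tall with a small shoe size" becomes common
production = rng.multivariate_normal(mean=[175, 42], cov=[[50, -12], [-12, 4]], size=10_000)

print(baseline.mean(axis=0), production.mean(axis=0))                   # nearly identical
print(np.corrcoef(baseline.T)[0, 1], np.corrcoef(production.T)[0, 1])   # ~0.85 vs ~-0.85

Univariate monitors on height or shoe size alone would miss this change; an embedding of the whole row captures the joint structure.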

Steps to Generate Tabular Embeddings

  1. Select the columns in your data that you want to convert to embeddings. If you're not sure which columns will work best, start with all of your feature and prediction columns. Avoid columns that contain incomprehensible strings, e.g., hashed fields or user IDs.

Example row:

age (feature 1) | state (feature 2) | credit_score
--------------- | ----------------- | -------------
10              | "CA"              | 560

  2. (Optional) You can also provide a dictionary mapping your column names to more verbose versions of them. This helps the embedding generator understand what each column means, in case the dataframe has column names that are not found in the vocabulary. For example: delinq_6mnths can be mapped to delinquencies_in_the_last_6_months. This won't change the column names of your dataframe.

  3. Choose a model type for generating embeddings. Read about supported models here. In this example, we've chosen distilbert-base-uncased for performance and simplicity.

  4. Generate the embeddings and assign them to a new column in your dataframe. In this example we named it "tabular_embedding_vector".

from arize.pandas.embeddings import EmbeddingGeneratorForTabularFeatures
from arize.pandas.logger import Client, Schema
from arize.utils.types import EmbeddingColumnNames, Environments, Metrics, ModelTypes

# Instantiate the embedding generator
generator = EmbeddingGeneratorForTabularFeatures(
    model_name="distilbert-base-uncased",
    tokenizer_max_length=512
)

# Select the columns from your dataframe to consider
selected_cols = ["age", "state", "credit_score"]

# (Optional) Provide a mapping for more verbose column names
column_name_map = {"delinq_6mnths": "delinquencies_in_the_last_6_months"}

# Generate tabular embeddings and assign them to a new column
df["tabular_embedding_vector"] = generator.generate_embeddings(
    df,
    selected_columns=selected_cols,
    col_name_map=column_name_map  # (OPTIONAL, can remove)
)

# Create embedding features dictionary
tabular_embedding_features = {
    # Dictionary keys will be the name of the embedding feature in the app
    "arize_tabular_embedding": EmbeddingColumnNames(
        vector_column_name="tabular_embedding_vector",
    ),
}

# Tabular feature columns to log alongside the embedding
feature_cols = selected_cols

# Log to Arize using the Arize pandas logger
# (assumes an instantiated client, e.g.
#  arize_client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY"))
response = arize_client.log(
    dataframe=df,
    model_id="tabular-model-with-embeddings",
    model_version="1.0",
    model_type=ModelTypes.REGRESSION,
    metrics_validation=[Metrics.REGRESSION],
    environment=Environments.PRODUCTION,
    schema=Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        prediction_label_column_name="prediction_label",
        actual_label_column_name="actual_label",
        feature_column_names=feature_cols,
        embedding_feature_column_names=tabular_embedding_features,
    )
)
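
As a quick sanity check, you can inspect the returned response (a sketch following the pattern in Arize's quickstarts; it assumes the synchronous pandas logger, which returns a requests-style response):

# Verify the upload succeeded
if response.status_code != 200:
    print(f"Logging failed with response code {response.status_code}: {response.text}")
else:
    print("Dataframe with tabular embeddings logged to Arize")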

Viewing your tabular embeddings

Once your embeddings are logged to Arize, you can monitor for multivariate drift. To learn more, see the Embedding Drift documentation.

Click a point on the drift-over-time graph to visualize those data points using UMAP. To learn more, see the UMAP documentation.
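
If you want to reproduce a similar projection locally, here is a minimal sketch (it assumes the umap-learn package and the tabular_embedding_vector column generated above; Arize computes its own projection in the platform):

import numpy as np
import umap  # pip install umap-learn

# Stack the per-row embedding vectors into an (n_rows, n_dims) matrix
vectors = np.stack(df["tabular_embedding_vector"].to_numpy())

# Project to 2D, analogous to the point cloud Arize renders
projection = umap.UMAP(n_components=2).fit_transform(vectors)  # shape (n_rows, 2)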

[Visual: how to generate tabular embeddings]

Log the whole dataframe to Arize. This way Arize receives your data in both tabular and embedding format, which assists with debugging and analysis in the platform. An example is presented in the code above, but refer to our SDK documentation for a complete list of attributes.
