Masking Span Attributes
In some situations, you may need to modify the observability level of your tracing. For instance, you may want to keep sensitive information from being logged for security reasons, or you may want to limit the size of base64-encoded images that are logged to reduce payload size.
The OpenInference Specification defines a set of environment variables you can configure to suit your observability needs. In addition, the OpenInference auto-instrumentors accept a trace config, which allows you to set these values in code without having to set environment variables, if you prefer.
The possible settings are:
| Environment Variable | Effect | Type | Default |
| --- | --- | --- | --- |
| OPENINFERENCE_HIDE_INPUTS | Hides input value, all input messages & embedding input text | bool | False |
| OPENINFERENCE_HIDE_OUTPUTS | Hides output value & all output messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_MESSAGES | Hides all input messages & embedding input text | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_MESSAGES | Hides all output messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_IMAGES | Hides images from input messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_TEXT | Hides text from input messages & input embeddings | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_TEXT | Hides text from output messages | bool | False |
| OPENINFERENCE_HIDE_EMBEDDING_VECTORS | Hides returned embedding vectors | bool | False |
| OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS | Hides LLM invocation parameters | bool | False |
| OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH | Limits characters of a base64-encoded image | int | 32,000 |
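For example, to redact all inputs and raise the image cap, you could set the corresponding variables before your application starts. A minimal sketch (shown from Python via `os.environ` for illustration; exporting the variables in your shell works the same way, and the value 100,000 is just an example):

```python
import os

# Set these before any OpenInference instrumentor is initialized, since
# the variables are read at instrumentation time. Equivalent to, e.g.,
# `export OPENINFERENCE_HIDE_INPUTS=true` in your shell.
os.environ["OPENINFERENCE_HIDE_INPUTS"] = "true"
os.environ["OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH"] = "100000"
```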
To set up this configuration you can either:

- Set environment variables as specified above
- Define the configuration in code as shown below
- Do nothing and fall back to the default values
- Use a combination of the three; the order of precedence is:
  1. Values set in the TraceConfig in code
  2. Environment variables
  3. Default values
Below is an example of how to set these values in code using our OpenAI Python and JavaScript instrumentors; however, the config is respected by all of our auto-instrumentors.
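A minimal Python sketch follows. The snake_case TraceConfig field names mirror the environment variables in the table above; treat the exact import paths and field names as an assumption and check them against your installed openinference-instrumentation version:

```python
from openinference.instrumentation import TraceConfig
from openinference.instrumentation.openai import OpenAIInstrumentor

# Each field mirrors one environment variable from the table above;
# unset fields fall back to the environment, then to the defaults.
config = TraceConfig(
    hide_inputs=True,                # OPENINFERENCE_HIDE_INPUTS
    hide_input_images=True,          # OPENINFERENCE_HIDE_INPUT_IMAGES
    base64_image_max_length=10_000,  # OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH
)

# Pass the config when instrumenting. Because values set in code take
# precedence, these settings override any environment variables.
OpenAIInstrumentor().instrument(config=config)
```

The same config argument can be passed to any of the other auto-instrumentors in place of OpenAIInstrumentor.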