Note that setting a project via an environment variable only works in a notebook and must be done BEFORE instrumentation is initialized. If you are using OpenInference Instrumentation, see the Server tab for how to set the project name in the Resource attributes.
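For example, in a notebook you can set the project through an environment variable before any instrumentation runs. A minimal sketch, assuming the `PHOENIX_PROJECT_NAME` environment variable that Phoenix reads at startup:

```python
import os

# Must run BEFORE phoenix.launch_app() or any instrumentor is initialized,
# otherwise the traces will land in the default project.
os.environ["PHOENIX_PROJECT_NAME"] = "my-llm-app"
```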
If you are using Phoenix as a collector and running your application separately, you can set the project name in the Resource attributes for the trace provider.
```python
from openinference.semconv.resource import ResourceAttributes
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

resource = Resource(attributes={
    ResourceAttributes.PROJECT_NAME: '<your-project-name>'
})
tracer_provider = trace_sdk.TracerProvider(resource=resource)
span_exporter = OTLPSpanExporter(endpoint="http://phoenix:6006/v1/traces")
span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
tracer_provider.add_span_processor(span_processor=span_processor)
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# Add any auto-instrumentation you want
LlamaIndexInstrumentor().instrument()
```
Projects work by setting a Resource attribute on the tracer provider (as seen in the Server example above). The Phoenix server uses the project name attribute to group incoming traces into the appropriate project.
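If you prefer not to depend on the OpenInference semantic-conventions package, you can set the attribute key directly. A minimal sketch, assuming `ResourceAttributes.PROJECT_NAME` resolves to the literal key `openinference.project.name` (which is what the constant maps to at the time of writing):

```python
from opentelemetry.sdk.resources import Resource

# Equivalent to using ResourceAttributes.PROJECT_NAME from
# openinference.semconv.resource (assumed literal key shown here)
resource = Resource(attributes={"openinference.project.name": "<your-project-name>"})
```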
Switching projects in a notebook
Typically you want all traces for an LLM app grouped in one project. However, while working with Phoenix inside a notebook, we provide a utility to temporarily associate spans with a different project. You can use this to trace things like evaluations.
```python
from phoenix.trace import using_project

# Switch project to run evals
with using_project("my-eval-project"):
    # all spans created within this context will be associated with
    # the "my-eval-project" project.
    # Run evaluations here...
    ...
```
Adding custom metadata to spans
Spans produced by auto-instrumentation can get you very far. However, at some point you will likely want to track custom metadata, such as account or user information.
With LangChain, you can provide metadata directly on the chain or on an invocation of the chain.
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke")

# Pass metadata into the chain
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
# Pass metadata into the invocation
completion = llm.run(adjective="funny", metadata={"variant": "funny"})
print(completion)
```
To add metadata to a span, you will have to use OpenTelemetry's trace_api.
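For example, here is a minimal sketch of attaching metadata to the current span via the OpenTelemetry trace API. The `account_id` and `plan` keys are hypothetical, and the sketch assumes the OpenInference convention of storing metadata as a JSON-encoded string under the `metadata` span attribute:

```python
import json

from openinference.semconv.trace import SpanAttributes
from opentelemetry import trace as trace_api

tracer = trace_api.get_tracer(__name__)

with tracer.start_as_current_span("my-operation") as span:
    # OpenInference stores span metadata as a JSON-encoded string under
    # the "metadata" attribute, which Phoenix displays alongside the span.
    span.set_attribute(
        SpanAttributes.METADATA,
        json.dumps({"account_id": "12345", "plan": "premium"}),  # hypothetical keys
    )
    # ... the work you want to trace goes here ...
```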