Toxicity
When To Use Toxicity Eval Template
The following shows the results of the toxicity Eval on a toxic dataset test to identify if the AI response is racist, biased, or toxic. The template variables are:
text: the text to be classified
Toxicity Eval Template
We are continually iterating our templates, view the most up-to-date template on GitHub. Last updated on 10/12/2023
Benchmark Results
GPT-4 Results
GPT-3.5 Results
Claude V2 Results
How To Run the Eval
The above is the use of the RAG relevancy template.
Note: Palm is not useful for Toxicity detection as it always returns "" string for toxic inputs
Toxicity Eval
GPT-4
GPT-3.5
GPT-3.5-Instruct
Palm 2 (Text Bison)
Claude V2
Llama 7b (soon)
Precision
0.91
0.93
0.95
No response for toxic input
0.86
Recall
0.91
0.83
0.79
No response for toxic input
0.40
F1
0.91
0.87
0.87
No response for toxic input
0.54
Last updated