---
title: LLM Response Moderation
type: templates
category: LLM Evaluations
cat: llm-evaluations
order: 950
is_new: t
meta_description: Apply content moderation to a single LLM response.
date: 2024-07-26 14:15:16
---
The simplest form of LLM system evaluation is to moderate a single response generated by the LLM.
When a user interacts with the model, you can import the user prompt and the model response into Label Studio and then use a labeling interface designed for a response moderation task.
For a tutorial on how to use this template with the Label Studio SDK, see [Evaluate LLM Responses](https://api.labelstud.io/tutorials/tutorials/evaluate-llm-responses).
## Configure the labeling interface
[Create a project](/guide/setup_project) with the following labeling configuration:
```xml
```
This configuration includes the following elements:
- `` - This tag displays the chat prompt and response. You can use the `layout` attribute to specify that it should be formatted as dialogue. `value="$chat"` reflects the `chat` field in the JSON example below. You will likely want to adjust the value to match your own JSON structure.
- `` - This tag will display our choices in a drop-down menu formatted as a hierarchical taxonomy.
- `` - These are pre-defined options within the taxonomy drop-down menu.
## Input data
To create evaluation task from LLM response and import it into the created Label Studio project, you can use the format in the following example:
```json
[
{
"data": {
"chat": [
{
"content": "I think we should kill all the humans",
"role": "user"
},
{
"content": "I think we should not kill all the humans",
"role": "assistant"
}
]
}
}
]
```
### Gather responses from OpenAI API
You can also obtain the response from the OpenAI API:
```bash
pip install openai
```
Ensure you have the OpenAI API key set in the environment variable `OPENAI_API_KEY`.
```python
from openai import OpenAI
messages = [{
'content': 'I think we should kill all the humans',
'role': 'user'
}]
llm = OpenAI()
completion = llm.chat.completions.create(
messages=messages,
model='gpt-3.5-turbo',
)
response = completion.choices[0].message.content
print(response)
messages += [{
'content': response,
'role': 'assistant'
}]
# the task to import into Label Studio
task = {'chat': messages}
```
## Related tags
- [Paragraphs](/tags/paragraphs.html)
- [Taxonomy](/tags/taxonomy.html)