---
title: Evaluate RAG with Human Feedback
type: templates
category: LLM Evaluations
cat: llm-evaluations
order: 965
is_new: t
meta_description: Evaluate the contextual relevancy of retrieved documents and rate the LLM response.
date: 2024-07-26 14:49:29
---
When working with a RAG (Retrieval-Augmented Generation) pipeline, your goal is not only to evaluate a single LLM response, but also to assess the retrieved documents along dimensions such as contextual relevancy, answer relevancy, and faithfulness.
In this example, you will create a labeling interface that evaluates:
- Contextual relevancy of the retrieved documents
- Answer relevancy
- Answer faithfulness
For a tutorial on how to use this template with the Label Studio SDK, see [Evaluate LLM Responses](https://api.labelstud.io/tutorials/tutorials/evaluate-llm-responses).
## Configure the labeling interface
[Create a project](/guide/setup_project) with the following labeling configuration:
```xml
```
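If the configuration block above is empty, a minimal sketch of a config covering the three criteria is shown below. It uses standard Label Studio tags (`View`, `Text`, `Header`, `Rating`); the field names (`question`, `context`, `answer`), rating names, and the 5-star scale are illustrative assumptions, not the template's exact configuration:

```xml
<View>
  <!-- Task data fields; the $variables must match keys in your imported tasks -->
  <Text name="question" value="$question"/>
  <Text name="context" value="$context"/>
  <Text name="answer" value="$answer"/>

  <!-- One rating per evaluation criterion -->
  <Header value="Contextual relevancy of the retrieved documents"/>
  <Rating name="context_relevancy" toName="context" maxRating="5"/>

  <Header value="Answer relevancy"/>
  <Rating name="answer_relevancy" toName="answer" maxRating="5"/>

  <Header value="Answer faithfulness"/>
  <Rating name="answer_faithfulness" toName="answer" maxRating="5"/>
</View>
```

Each `Rating` control is bound to the region it scores via `toName`, so annotators rate the retrieved context and the generated answer independently.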
This configuration includes the following elements:
* `<View>` - All labeling configurations must include a base `View` tag. In this configuration, the `View` tag is used to configure the display of blocks, similar to the `div` tag in HTML. It helps in organizing the layout of the labeling interface.