You can only have one OpenAI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see Supported base models.
If you don't already have one, you can create an OpenAI account here.
You can find your OpenAI API key on the API key page.
You can only have one Gemini key per organization. This grants you access to a set of whitelisted models. For a list of these models, see Supported base models.
For information on getting a Gemini API key, see Get a Gemini API key.
You can only have one Vertex AI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see Supported base models.
Follow the instructions here to generate a credentials file in JSON format: Authenticate to Vertex AI Agent Builder - Client libraries or third-party tools
The JSON credentials are required. You can also optionally provide the project ID and location associated with your Google Cloud Platform environment.
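A service-account credentials file generated through that process typically has the following shape. All values below are placeholders, not real credentials:

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "abc123placeholder",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-sa@my-gcp-project.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

The `project_id` in this file is the same project ID you can optionally supply alongside the credentials.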
You can only have one Anthropic key per organization. This grants you access to a set of whitelisted models. For a list of these models, see Supported base models.
For information on getting an Anthropic API key, see Anthropic - Accessing the API.
Each Azure OpenAI key is tied to a specific deployment, and each deployment comprises a single OpenAI model. So if you want to use multiple models through Azure, you will need to create a deployment for each model and then add each key to Label Studio.
For a list of the Azure OpenAI models we support, see Supported base models.
To use Azure OpenAI, you must first create an Azure OpenAI resource and then create a model deployment within it.
When adding the key to Label Studio, you are asked for the following information:
| Field | Description |
|---|---|
| Deployment | This is the name of the deployment. By default, this is the same as the model name, but you can customize it when you create the deployment. If they are different, you must use the deployment name and not the underlying model name. |
| Endpoint | This is the target URI provided by Azure. |
| API key | This is the key provided by Azure. |
You can find all this information in the Details section of the deployment in Azure OpenAI Studio.
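To see how these three fields fit together, the sketch below shows how the deployment name and endpoint combine into the Azure OpenAI chat-completions URL. All values, including the `api-version`, are placeholder assumptions; use the versions and names from your own resource:

```python
# Sketch: how the Deployment, Endpoint, and API key fields combine into an
# Azure OpenAI chat-completions request. Values here are placeholders.
endpoint = "https://my-resource.openai.azure.com"  # Endpoint field (target URI base)
deployment = "gpt-4o-deploy"                       # Deployment field (not the model name)
api_key = "my-azure-key"                           # API key field
api_version = "2024-02-01"                         # assumption: use a version your resource supports

# The deployment name is part of the URL path, which is why Label Studio
# needs the deployment name rather than the underlying model name.
url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}

print(url)
```

Note that the API key travels in an `api-key` header rather than in the URL.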

Use the Azure AI Foundry model catalog to deploy a model: AI Foundry docs.
Once deployed, navigate to the Details page of the deployed model. The information you need to set up the connection in Label Studio is listed under Endpoint:

When adding the key to Label Studio, you are asked for the following information:
| Field | Description |
|---|---|
| Model | This is the model name. It is provided as a parameter with your endpoint information (see the screenshot above). |
| Endpoint | This is the Target URI provided by AI Foundry. |
| API key | This is the Key provided by AI Foundry. |
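Unlike Azure OpenAI deployments, here the model name travels in the request body rather than in the URL. A rough sketch of the request these three values produce, with placeholder names and the common OpenAI-compatible header convention assumed throughout:

```python
model = "Llama-3.3-70B-Instruct"                     # Model field (placeholder name)
endpoint = "https://my-foundry.models.ai.azure.com"  # Target URI from AI Foundry (placeholder)
api_key = "my-foundry-key"                           # Key from AI Foundry (placeholder)

# The model name is passed as a parameter in the payload,
# not baked into the endpoint URL.
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
}
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

print(payload["model"])
```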
You can use your own self-hosted and fine-tuned model as long as it meets the following criteria:

- The model must support `response_format` with `type: json_object` and a `schema` containing a valid JSON schema: `{"response_format": {"type": "json_object", "schema": <schema>}}`

Examples of compatible LLMs include Ollama and sglang.
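As a concrete sketch, a chat-completions request body that satisfies this requirement might look like the following. The schema and model name are made-up examples:

```python
import json

# Hypothetical JSON schema describing the structured output we want back.
schema = {
    "type": "object",
    "properties": {"label": {"type": "string"}, "score": {"type": "number"}},
    "required": ["label", "score"],
}

# Request body for an OpenAI-compatible /chat/completions endpoint that
# supports constrained JSON output via response_format.
body = {
    "model": "llama3.2",  # placeholder model name
    "messages": [{"role": "user", "content": "Classify: great product!"}],
    "response_format": {"type": "json_object", "schema": schema},
}

print(json.dumps(body, indent=2))
```

A server that honors this field will return a message whose content parses as JSON matching the schema.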
To add a custom model, enter the following:

| Field | Description |
|---|---|
| Endpoint | The URL of your model's OpenAI-compatible API, for example `https://my.openai.endpoint.com/v1` (note the `v1` suffix is required). |
| Model name | The name of the model to use. |
| API key | The key for your endpoint. |

For example, to connect to a local model running in Ollama (`ollama run llama3.2`), go to **API Keys** and add the following to the **Custom** provider:

| Field | Value |
|---|---|
| Endpoint | `http://localhost:11434/v1` (in place of `https://my.openai.endpoint.com/v1`) |
| Model name | `llama3.2` (must match the model name in Ollama) |
| API key | `ollama` (default) |
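To illustrate how these values are used, the sketch below builds (but does not send) the OpenAI-compatible chat-completions request that would go to a local Ollama server. The message content is a placeholder:

```python
import json
import urllib.request

endpoint = "http://localhost:11434/v1"  # Endpoint field
api_key = "ollama"                      # API key field (Ollama's default)
model = "llama3.2"                      # Model name field; must match the name in Ollama

# Construct the request object without sending it, so the URL and
# headers can be inspected even with no server running.
req = urllib.request.Request(
    f"{endpoint}/chat/completions",
    data=json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode(),
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    method="POST",
)

print(req.full_url)
```

If the endpoint is wrong (for example, missing the `v1` suffix), this is the URL that will fail, so printing it is a quick sanity check.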
For example, to use a model through the Hugging Face router:

| Field | Value |
|---|---|
| Model name | `deepseek-ai/DeepSeek-R1` |
| Endpoint | `https://router.huggingface.co/together/v1` |
| API key | `<your-hf-api-key>` |