How to Self-Host an Open-Source Model

In many cases, either Gobi will have a built-in provider or the API you use will be OpenAI-compatible, in which case you can use the "openai" provider and change the "baseUrl" to point to the server. However, if neither of these is the case, you will need to wire up a new LLM object.
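For the OpenAI-compatible case, a minimal sketch of such an entry, assuming a hypothetical local server at http://localhost:8000/v1 serving a model named llama2-7b:

```yaml
# config.yaml (illustrative; adjust baseUrl and model to match your server)
models:
  - name: My Local Server
    provider: openai
    model: llama2-7b
    baseUrl: http://localhost:8000/v1
```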

How to Set Up Authentication

Basic authentication can be done with any provider using the apiKey field:
config.yaml
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    apiKey: <YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>
config.json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "apiKey": "<YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>"
    }
  ]
}
This translates to the header "Authorization": "Bearer xxx". If you need to send custom headers for authentication, you may use the requestOptions.headers property, as in this example with Ollama:
config.yaml
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    requestOptions:
      headers:
        X-Auth-Token: xxx
config.json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "requestOptions": { "headers": { "X-Auth-Token": "xxx" } }
    }
  ]
}
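To make the two mechanisms concrete, here is an illustrative sketch (not the provider's actual implementation) of how a client might assemble request headers: the apiKey field becomes a standard Bearer token, and any requestOptions.headers entries are layered on top.

```python
def build_headers(api_key=None, extra_headers=None):
    """Sketch of header assembly for an LLM request.

    api_key:       value of the `apiKey` config field, if any
    extra_headers: value of `requestOptions.headers`, if any
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        # apiKey is sent as a standard Bearer token.
        headers["Authorization"] = f"Bearer {api_key}"
    if extra_headers:
        # Custom headers are merged in and can override defaults.
        headers.update(extra_headers)
    return headers

print(build_headers(api_key="xxx"))
print(build_headers(extra_headers={"X-Auth-Token": "xxx"}))
```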
Similarly, if your model server requires a client certificate for authentication, you may use the requestOptions.clientCertificate property, as in the example below:
config.yaml
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    requestOptions:
      clientCertificate:
        cert: C:\tempollama.pem
        key: C:\tempollama.key
        passphrase: c0nt!nu3
config.json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "requestOptions": {
        "clientCertificate": {
          "cert": "C:\\tempollama.pem",
          "key": "C:\\tempollama.key",
          "passphrase": "c0nt!nu3"
        }
      }
    }
  ]
}
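These authentication options are not mutually exclusive: a single model entry can carry an API key, custom headers, and a client certificate at once. A sketch combining the values from the examples above:

```yaml
# config.yaml (illustrative; reuses the placeholder values from above)
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    apiKey: <YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>
    requestOptions:
      headers:
        X-Auth-Token: xxx
      clientCertificate:
        cert: C:\tempollama.pem
        key: C:\tempollama.key
        passphrase: c0nt!nu3
```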